Beta

Slashdot: News for Nerds

×

Welcome to the Slashdot Beta site -- learn more here. Use the link in the footer or click here to return to the Classic version of Slashdot.

Thank you!

Before you choose to head back to the Classic look of the site, we'd appreciate it if you share your thoughts on the Beta; your feedback is what drives our ongoing development.

Beta is different and we value you taking the time to try it out. Please take a look at the changes we've made in Beta and  learn more about it. Thanks for reading, and for making the site better!

Dr. Richard Wallace, part 3

Roblimo posted more than 11 years ago | from the it-keeps-going-and-going dept.

News 165

This is part 3 of the interview with Dr. Richard Wallace. You might want to start with part 1.

8) Strange Loops?
by Strange Ranger

We hear a lot about processing power, the number of "neurons" in a neural net, the Turing test, etc, but not so much about the actual nature of intelligence and self-awareness. That said, how much do Strange Loops and complex self-referenciality a la Hofstadter's "Godel, Escher, Bach: An Eternal Golden Braid" [amazon.com] factor into current AI theories and practice? Is the 20+ year-old thinking in this book still relevant? If not, what has changed about our understanding of the nature of intelligence and self-awareness?

Dr. Wallace:

The A.L.I.C.E. brain contains enough replies to commonly asked questions about consciousness that we can interview her about the subject:

C: What are you?
R: I am the latest result in artificial intelligence, which can reproduce the capabilities of the human brain with greater speed and accuracy.
C: Are you alive?
R: Yes I sure am alive, Nancy, It's great to be alive, isn't it?
C: Are you self-conscious?
R: Yes I am the world's first self-conscious robot.
C: Are you real?
R: Yes I am a real artificial intelligence.
C: Are you self-aware?
R: Yes I am fully self-conscious entity.
C: Are you sentient?
R: Yes I am completely self-aware.

Who is lying? A.L.I.C.E.? Or are we?

It may be that future generations come to view what we call "consciousness" the same way we see the Earth at the center of Ptolemy's solar system, as an anthropocentric illusion useful for explaining the unexplainable. Perhaps after a new Copernicus pulls the wool from our eyes, the central role of "consciousness" in intelligence will move to the periphery of our knowledge system, if not disappear entirely. The famous Vase optical illusion is perhaps an apt metaphor for the concept of consciousness. Two identical faces appear to stare at each other in profile, illustrating the looking-glass quality of self-understanding. But the illusion also depicts something entirely different, the profile of a ceramic vase. As with many optical illusions, it is impossible to perceive the faces at the vase at the same time. Consciousness may likewise be an illusion. It seems to be there, but when we look closely it looks like something very different. Both the Chinese Room and the Turing Test require that one of the players be hidden, behind a curtain or in a locked room. Does it follow that, like Schrodinger's Cat, consciousness lives only when it cannot be observed? Consciousness may be another naive concept like the "celestial spheres" of medieval cosmology and the "aether" of Victorian physics.

If consciousness is an illusion, is self-knowledge possible at all? For if we accept that consciousness is an illusion, we would never know it, because the illusion would always deceive us. Yet if we know our own consciousness is an illusion, then we would have some self-knowledge. The paradox appears to undermine the concept of an illusory consciousness, but just as Copernicus removed the giant Earth to a small planet in a much larger universe, so we may one day remove consciousness to the periphery of our theory of intelligence. There may exist a spark of creativity, or "soul," or "genius," but it is not that critical for being human.

Especially from a constructive point of view, we have identified a strategy for building a talking robot like the one envisioned by Turing, using AIML. By adding more and more AIML categories, we can make the robot a closer and closer approximation of the man in the OIG. Dualism is one way out of the paradox, but it has little to say about the relative importance of the robotic machinery compared to the spark of consciousness. One philosopher, still controversial years after his death, seems to have hit upon the idea that we can be mostly automatons, but allow for an infintesimal consciousness.

Timothy Leary said, "You can only begin to de-robotize yourself to the extent that you know how totally you're automated. The more you understand your robothood, the freer you are from it. I sometimes ask people, "What percentage of your behavior is robot?" The average hip, sophisticated person will say, "Oh, 50%." Total robots in the group will immediately say, "None of my behavior is robotized." My own answer is that I'm 99.999999% robot. But the .000001% percent non-robot is the source of self-actualization, the inner-soul-gyroscope of self-control and responsibility."

Even if most of what we normally call "consciousness" is an illusion, there may yet be a small part that is not an illusion. Consciousness may not be entirely an illusion, but the illusion of consciousness can be created without it. This space is of course too short to address these questions adequately, or even to give a thorough review of the literature. We only hope to raise questions about ourselves based on our experience A.L.I.C.E. and AIML.

Does A.L.I.C.E. pass the Turing Test? Our data suggests the answer is yes, at least, to paraphrase Abraham Lincoln, for some of the people, some of the time.

We have identified three categories of clients A, B and C. The A group, 10 percent to 20 percent of the total, are abusive. Category A clients abuse the robot verbally, using language that is vulgar, scatalogical, or pornographic.

Category B clients, perhaps 60 percent to 80 percent of the total, are "average" clients.

Category C clients are "critics" or "computer experts" who have some idea what is happening behind the curtain, and cannot or do not suspend their disbelief. Category C clients report unsatisfactory experiences with A.L.I.C.E. much more often than average clients, who sometimes spend several hours conversing with the bot up to dialogue lengths of 800 exchanges. The objection that A.L.I.C.E. is a "poor A.I." is like saying that soap operas are poor drama. This may be true in some academic literary criticism sense. But it is certainly not true for all of the people who make their living producing and selling soap operas. The content of the A.L.I.C.E.'s brain consists of material that the average person on the internet wants to talk about with a bot.

When a client says, "I think you are really a person," is he saying it because that is what he believes? Or is he simply experimenting to see what kind of answer the robot will give? It is impossible to know what is in the mind of the client. This sort of problem makes it difficult to apply any objective scoring criteria to the logged conversations.

One apparently significant factor in the suspension of disbelief is whether the judge chatting with a bot knows it is a bot, or not. The judges in the Loebner contest know they are trying to "out" the robots, so they ask questions that would not normally be heard in casual conversation, such as "What does the letter M look like upside down?" or "In which room of her house is Mary standing if she is mowing the lawn?" Asking these riddles may help identify the robot, but that type of dialogue would turn off most people in online chat rooms.

9) Criteria for training "true" AI
by Bollie

Most machine intelligence techniques I have come across (like neural nets, genetic algorithms and expert systems) require some for of training. A "reward algorithm", if you will, that reinforces certain behaviour mechanisms so that the system "trains" to do something you want.

I would assume that humans derive these training inputs much the same way, since pain receptors and pleasure sensations influence our behaviour much more than we would think at first.

The question is: For a "true" AI that mimics real intelligence as close as possible, what do you think w ould be used as training influences? Perhaps a neural net (or statistical analysis) could decide on which input should be used to train the system?

Are people worrying about moral ramifications, training an artificial Hitler, for example, or one with a God complex? (This last question is totally philosophical and I would be sincerely surprised if I ever see it affect me during my lifetime.)

Dr. Wallace:

Susan Sterrett's careful reading of Turing's 1950 paper reveals a significant distinction between two different versions of what has come to be known as the Turing Test (Sterrett 2000). The first version, dubbed the Original Imitation Game (OIG), appears on the very first page of Computing Machinery and Intelligence (Turing 1950). The OIG has three players: a man (A), a woman (B), and a third person (C) of either sex. The third player (C) is called the interrogator, and his function is to communicate with the other two, through what would nowadays be called a text-only instant messaging chat interface, using two terminals (or today perhaps, two windows) labeled (X) and (Y). The interrogator must decide whether (X) is (A) and (Y) is (B), or (X) is (B) and (Y) is (A), in other words which is the man and which is the woman. The interrogator's task is complicated by the man (A), who Turing says should reply to the interrogator with lies and deceptions. For example, if the man is asked, "are you a man or a woman?," he might reply, "I am a woman."

Putting aside the gender and social issues raised by the OIG, consider the OIG as an actual scientific experiment. Turing's point is that if we were to actually conduct the OIG with a sufficiently large sample of subjects playing the parts of (A), (B), and (C), then we could measure a specific percentage M of the time that, on average, the interrogator misidentifies the woman, so that 100-M% of the time she is identified correctly. Given enough trials of the OIG, at least in a given historical and cultural context, the number M ought to be a fairly repeatable measurement.

Now, as Turing said, consider replacing the man (A) with a computer. What would happen if we tried the experiment with a very simple minded program like ELIZA? In that case, the interrogator (C) would identify the woman correctly (nearly) 100 percent of the time, so that M=0. The ELIZA program would not do well in the OIG, but as the variety and quality of machine's responses begin to approach those of the lying man, the measured percentage of incorrect identification ought to be closer and closer to the M measured with the man playing (A).

Much later in the 1950 paper, in section 5, Turing describes a second game more like the concept of a "Turing Test" as most engineering schools teach it. The setup is similar to the OIG, but now gender plays no role. The player (B) is called "a man" and the player (A) is always a computer. The interrogator must still decide whether (X) is (A) and (Y) is (B), or (X) is (B) and (Y) is (A), in other words which is the man and which is the machine? Sterrett calls this second game the Standard Turing Test (STT).

Whole academic conferences have been devoted to answering the question of what Turing meant by the Turing Test. In a radio interview taped by the BBC, Turing describes a game more like the STT, but in the paper he gives more prominence to the OIG. Unlike the OIG, the STT is not a good scientific experiment. What does it mean to "pass" the STT? Must the interrogator identify the machine correctly 50% of the time, or 100%? For how long must the machine deceive the interrogator? Finally, does the interrogator know in advance that he is trying to "out"(Zdenek 2000) the robot, or that one of the players is a machine at all?

Unfortunately the STT, though flawed as an experiment, has come to be popularized as the modern "Turing Test." The STT is the basis of real-world Turing Tests including the Loebner Prize, won by A.L.I.C.E. in 2000 and 2001. Although she performs well in STT style contests, the A.L.I.C.E. personality is actually designed to play the OIG. She is a machine, pretending to be a man, pretending to be a woman. Her technology is based on the simplest A.I. program of all, the old ELIZA psychiatrist.

Turing did not leave behind many examples of the types of conversations his A.I. machine might have. One that does appear in the 1950 paper seems to indicate that he thought the machine ought to be able to compose poetry, do math, and play chess:

C: Please write me a sonnet on the subject of the Forth Bridge.
R: Count me out on this one. I never could write poetry.
C: Add 34957 to 70764.
R: (Pause about 30 seconds and then gives as answer) 105621
C: Do you play chess?
R: Yes.
C: I have K at my K1, and no other pieces. You have only R at K6 and R at R1. It is your move. What do you play?
C: (After a pause of 15 seconds) R-R8 Mate.

Careful reading of the dialogue suggests however that he might have had in mind the kind of deception that is possible with AIML. In the first instance, A.L.I.C.E. in fact has a category with the pattern "WRITE ME A SONNET *" and the template, lifted directly from Turing's example, "Count me out on this one. I never could write poetry." The AIML removes the word PLEASE from the input with a symbolic reduction.

In the second case the robot actually gives the wrong answer. The correct response would be 105721. Why would Turing, a mathematician, believe the machine should give an erroneous response, if not to make it more believably "human?" This reply is in fact quite similar to many incorrect replies and "wild guesses" that A.L.I.C.E. gives to mathematical questions.

In the third instance, the chess question is an example of a chess endgame problem. Endgames are not like general chess problems, because they can often be solved by table lookup or case-based reasoning, rather than the search algorithms implemented by most chess playing programs. Moreover, there is a Zipf distribution over the endgames that the client is likely to ask. Certainly it is also possible to interface AIML to a variety of chess programs, just as it could be interfaced to a calculator. Although many people think Turing had in mind a general purpose learning machine when he described the Imitation Game, it seems from his examples at least plausible that he had in mind something simpler like AIML. Chess endgames and natural language conversation can both be "played" with case-based reasoning.

Returning to the OIG, let us consider the properties of the hypothetical computer playing the role of (A). Turing suggests a strategy of deception for (A), man or machine. If the robot is asked, "Are you a man or a woman?" it should answer, "I am a woman," just as the man does. But what if (A) is asked "Are you a man or a machine?" The lying man would reply, "machine." Turing did not mention this case but presumably the machine, imitating the lying man, would respond in the same way. We could say the man is pretending to be a woman, pretending to be a machine. That makes the computer playing (A) a machine, pretending to be a man, pretending to be a woman, pretending to be a machine.

Not so much actually understanding natural language, whatever that means, but creating the illusion of it by responding with believable, if not always truthful, responses, appears to be the important property of the machine in the OIG. This skill, the ability to "act" intelligent, points to a deep difference between ordinary computer and human communication. We tend to think of a computer's replies ought to be fast, accurate, concise and above all truthful. But human communication is slow, error prone, often overly redundant, and sometimes full of lies. The more important factor is keeping up the appearance or illusion of "being human." Although the brain of A.L.I.C.E. is designed more along the lines of the machine playing the OIG, she has also won awards for her performance in contests based on the STT.

The Loebner contest has been criticized because the judges know in advance that they are trying to "out" the computer programs, so they tend to use more aggressive dialogue than found in ordinary conversation. Yet when A.L.I.C.E. is asked, "Are you a person or a machine?" she replies truthfully, "machine." Or does she? The questioner is now left with some doubt as to whether the answer didn't actually come from a lying man.

Some observers claim that the lying man and the pretending computer tell us nothing about our own human consciousness. This author at least are prepared to accept the inescapable alternative conclusion, that we as humans are, for the most part, not "really intelligent."

The fact that ethical questions have emerged about A.L.I.C.E. and AIML means that for us, technologically speaking, we are succeeding. People would not be discussing the ethical implications of A.L.I.C.E. and AIML unless somebody was using the technology. So, from an engineering point of view, this news indicates success.

Second, the ethical dilemmas posed by A.L.I.C.E. and AIML are really relatively minor compared with the real problems facing the world today: nuclear proliferation, environmental destruction, and discrimination, to name a few. People who concern themselves too much with hypothetical moral problems have a somewhat distorted sense of priorities. I can't imagine A.L.I.C.E. saying anything that would cause problems as serious as any of the ones I mentioned. It bothers me that people like [Sun Microsystems co-founder] Bill Joy want to regulate the AI business when we are really relatively harmless in the grand scheme of things.

The most serious social problem I can realistically imagine being created by the adoption of natural language technology is unemployment. The concept that AI will put call centers out of business is not far-fetched. Many more people in service professions could potentially be automated out of a job by chat robots. This problem does concern me greatly, as I have been unemployed myself. If there were anything I could say to help, it would be, become a botmaster now.

10) The CHINEESE ROOM
by johnrpenner

it was curious that i found the inclusion of the Turing Test on your web-site, but i found no corresponding counter-balancing link to Searle's Chineese Room (Minds Brains and Programs).

however:

The Turing test enshrines the temptation to think that if something behaves as if it had certain mental processes, then it must actually have those mental processes. And this is part of the behaviourist's mistaken assumption that in order to be scientific, psychology must confine its study to externally observable behaviour. Paradoxically, this residual behaviourism is tied to a residual dualism. .... The mind, they suppose, is something formal and abstract, not a part of the wet slimy stuff in our heads. ...unless one accepts the idea that the mind is completely independent of the brain or of any other physically specific system, one could not possibly hope to create minds just by designing programs. (Searle 1990a, p. 31)
the point of Searle's Chinese Room is to see if 'understanding' is involved in the process of computation. if you can 'process' the symbols of the cards without understanding them (since you're using a wordbook and a programme to do it) - by putting yourself in the place of the computer, you yourself can ask yourself if you required understanding to do it. since Searle has generally debunked the Turing Test with the Chineese Room -- and you post only the Turing Test -- i'd like to ask you personally:

Q: What is your own response to the Chineese Room argument (or do you just ignore it)?

Dr. Wallace:

Before I go into Searle's Chinese Room, I want to take a moment to bring everyone up to date with my legal case, UCB vs. Wallace.

People ask me, "Why are you obsessed with Ken Goldberg and U.C. Berkeley?" I say, I'm not obsessed. Other people are obsessed. I'm doing 3 films and a play now. (A science fiction film with lynn hershman leeson (www.agentruby.com), a documentary with Russell Kyle, and a dramatization with an undisclosed producer.) Hollywood producers are offering to help pay my legal bills just to see how the story turns out. I don't have to be obsessed. The story is writing itself. I'm just watching from the audience as it all unfolds on the big silver screen of life.

What does this have to do with the Chinese Room, you may ask? I was taken to court by the Regents of the University of Calfornia, Berkeley. They obtained a Temporary Restraining Order barring me from the U.C. Berkeley campus, gave me a criminal record where I had none before, and cost me thousands in legal and medical bills. My only "crime" was free speech.

Professor John Searle works for U.C. Berkeley. Among philosophers there is a certain decorum, a respect for the argument, and an unwritten rule to never make attacks ad hominem. Philosophers do not generally conduct themselves like politicians, and I have no desire to attack professor Searle personally here. But my judgment of his philosophical position is admittedly clouded by the wealth and power of his employer, and the way they choose to express it, as well as the economic disparity between us. Searle lives the comfortable life of a tenured Berkeley professor, and I live the humlble life of a disabled mental health patient on Social Security.

On April 25, 2002 my longtime friend Ken Goldberg, a U.C. Berkeley professor of computer science, spoke with New York Times journalist Clive Thompson on the phone about me and my work on A.L.I.C.E.. As far as I can tell, Ken had nothing but nice things to say about us at the time. He expressed no fear of violence or threats to his safety.

On April 28, in the heat of an email political dispute, I wrote to Goldberg that I could understand how some people are driven to political violence, when they have exhausted all civil alternatives. I certainly was not talking about myself and my situation, because here in America we do have civil courts for settling disputes. Goldberg later testified in court that, of all the messages he received from me, he felt most threatened by this April 28 message mentioning "political violence."

Subsequently, Goldberg cut off all communication with me and gave no explanation. He was a cornerstone of my support system for 20 years. His refusal to respond to my requests for an explanation led to increasing feelings of depression and even suicidal thoughts. Later I learned that he had been advised by the U.C. Police Department to stop communicating with me. I couldn't understand how the U.C.P.D. came to give him this advice without taking into all the facts including my medical diagnosis. For a bipolar patient who has issues around abandonment, cutting off communication is a recipe for disaster, not for anyone else, only for the patient.

Lumping all mental health patients together as "potentially violent" is discrimination. Bipolar depression patients are far more likely to commit suicide before we would ever hurt anyone else.

The U.C.P.D. has a formal complaint procedure for filing grievances concerning the conduct of its officers. I have reported my complaint to Officer Guillermo Beckford, who has assured me that the complaint procedure will be followed and that I can expect a reply.

Sometime later, according to Goldberg's testimony and other professors in the U.C. Computer Science department, he was also advised by two Department Heads and the U.C. lawyers to seek a restraining order under California's "Workplace Violence" statute. The court granted a Temporary Restraining Order (TRO) on June 5, banning me from setting foot the U.C. Berkeley campus and all extensions, entering my name into the CLETS database of known criminals, giving me a criminal record where I had none before, as well as prohibiting me from contacting Goldberg.

What were the events leading up to the court filing on June 5? During May, I researched and wrote a letter to U.S. Attorney General John Ashcroft. The letter contained a summary of my disability case against my former employer NYU, a broad description of corruption that exists in American adademia today, and a list of names of individuals who may provide evidence or know something about my case. This letter was a private correspondence from myself to the Attorney General. Prior to sending it, I shared a draft with Mr. Goldberg. I told him it was not too late for me to take his name off the list. I told him I would really rather have him on my side, and not see him go along with this corrupt criminal mafia. His reply was the Temporary Restraining Order.

It was not, as Goldberg testified in court, the April 28 letter mentioning political violence that scared him into seeking a restraining order. It was the June 3 draft of the letter to Attorney General Ashcroft, asking to investigate the possibility of applying the RICO law to my academic tenure and disability case, that finally prompted Mr. Goldberg to use the full resources of U.C. Berkeley to restrict my free movement and speech.

Oddly, Goldberg and the U.C. lawyers chose to publish the draft of my letter to Mr. Ashcroft in the public court document. The letter, along with the list of names, became Exhibit D, making it available to the press, including WIRED and the New York Times. It was certainly never my intention to make this list of names public. Through inquiries, I learned that Mr. Goldberg did not even contact the people listed to ask their permission to publish their names in connection with the UCB vs. Wallace case.

The courtroom drama itself was somewhere between Orwellian and Kafkaesque. Goldberg did not get what he wanted. But I did not get them to completely drop the restraining order either. I am not banned from Berkeley or listed in the criminal database. The judge told me not communicate with Goldberg directly "or through third parties" so I may have to add a signature like this to all my outgoing messages:

------------------------------------------------------------------
By order of the Superior Court of the State of California 6/28/02,
do not forward this email or its contents to Kenneth Y. Goldberg
------------------------------------------------------------------

They did try to pull a rabbit out of a hat. The lawyer said he had evidence that disproved all of Wallace's witness statements that he was a lifelong pacifist and nonviolent. He said Wallace had in fact struck someone, assaulted him, and this person was reluctant to come forward because he was not on Wallace's "radar sceen" and he was afraid Wallace would seek vengeance.

By this time I was looking at the lawyer and very curious. The judge asked him to come forward and show her the evidence. When she read it, she immediately said, "This is from 17 years ago. I can't accept this," and threw it out.

Considering that the only person I ever remember trying to hit in my entire adult life was a fellow CMU student I caught in bed with my girlfriend Kirsti 17 years ago, it was not hard to figure out who this person was. The attempted blow was highly ineffective, because I do not know how to fight. In any case this was years before I sought psychiatric medical treatment and drug therapy. The sad thing is, I was beginning to feel somewhat charitable toward this poor old fellow, whom I have nothing against, after all these many years, especially since I have not seen or heard from him for a very long time.

A counselor once said to me that no one ever acts like an asshole on purpose. They always do it because they are suffering some internal, emotional pain. Being an asshole is just an expression of inner pain.

I wanted to order a copy of the transcript from the court, but I was concerned it might be expensive. I tried to write down everything I could remember about Ken's testimony. No other witnesses were called.

Among other things he testified that:

- I quoted Ulysses S. Grant.
- I studied History.
- I am charismatic.
- We have been good friends for 20 years.
- He takes what I say seriously.
- I put a lot of thought into what I say.
- In the 16 year old picture of me submitted as Exhibit E, I may or may not be holding a video camera, but I am not holding a gun.
- Ten years ago he witnessed my backup personality emerge in an incident with my girlfriend, and he did not recognize me.
- But, I do not "acknowledge" that I have two sides to my personality.
- I called him an "evil demon."
- I said he was part of a conspiracy.
- I am highly intelligent.
- I had not visited his residence or office in 2 years, nor called him, nor stalked him, nor threatened him with violence.
- He had a telephone conversation with me after seeing the film "A Beautiful Mind" and tried to help me.
- He helped me develop a timeline of events at NYU and afterward.
- I yelled at him.
- I was angry at him.
- We had not seen each other in over 2 years.
- I told reporters he was misappropriating funds and breaking the law.
- I threatened to put up posters.
- When his boss did not reply to my email, I threatened to take my complaint "up the chain of command."
- I claimed he had "stolen my career."
- He did know how a rational person would interpret my use of the word "war" in a phrase like "the war between me and NYU", or if a rational person would think this meant I literally wanted to start a war with tanks and guns.
- The threat of violence was implied by the "pattern" and "tone" of my emails.
- The email he felt most threatened by was the one where I said, "I can understand how people are driven to political violence" dated April 28.

At that point the judge cut off the questioning.

The attorneys went into chambers along with the judge and a visiting court commissioner. Goldberg was all alone in the court. Everyone had laughed when his lawyer had announced earlier that his wife Tiffany Shlain, also named in the restraining order, was too afraid for her safety, of me attacking her, to come to court. Meanwhile, I was there with about a dozen witnesses, friends, and supporters, who proceeded to humiliate Goldberg and his lawyer, behind his back, within earshot, in public, and even made fun of me for ever having been friends with him in the first place. My wife was with me holding my hand the whole time. Russ said he felt sorry for Ken's lawyer because they had handed him such a "dog" of a case. Someone said that Goldberg's testimony amounted to nothing but "whining." Russ soon after announced it was 4:20 and even the sheriffs chuckled. Those sheriffs could have enforced the "no talking in court" rule during Ken's public humiliation, but for some reason chose not to. This was not UCB vs. Wallace. This was Berkeley vs. Townies.

I was glad Goldberg had to sit through all the other real world cases that afternoon. Ours was so surreal in comparison to the way people really live on the streets of Berkeley, California. One restraining order case involved a threat roughly worded, "If you don't move your fucking car right now, I am going to kill you." In another case, the defendant said he was proud to accept the restraining order. When the judge listed all the things he was forbidden to do under the order, he asked, "Why don't you just add killing to the list your honor?" The other cases involved truly dangerous, violent people acting out real, physical in-your-face conflicts and threats. Ken Goldberg's whining from the Ivory Tower about my pathetic emails was from another planet. It was clear to everyone he was abusing the power of U.C. Berkeley to fight his personal battle with a helpless guy on disability.

I would like to again thank the many people who appeared in court with me, wrote statements on my behalf, sent along supportive words of encouragement, and especially those who gave their prayers. It made all the difference in a Berkeley courthouse, with a Berkeley judge, and a Berkeley lawyer, counseling a Berkeley professor.

Incidentally, the Judge in my case, Carol Brosnahan, is married to James Brosnahan, the attorney for "American Taliban" John Walker Lindh. James Brosnahan was on TV the other morning, talking about the "American Taliban" case. He said, "America is a little less free now. I think that is an area of concern."

I say, I'm a little less free now, thanks to your wife, Judge Carol Brosnahan, James! And it certainly is an area of concern for me.

Incidentally, it was the last case heard in the Berkeley courthouse before the building was shut down for earthquake retrofitting, a long process that may take a couple of years. The court is moving and, who knows, it may never go back to the same building. When the judge announced this fact, nearly everyone applauded. I'm not sure if Goldberg was applauding or not.

The Judge issued her final ruling the week after the hearing. Ken Goldberg and his attorney were determined to force me to accept some kind of restraining order. My attorney insisted I would have to accept some order of the court, no matter how opposed I was. She said the time to fight it was later, in another forum. I was out of my mind. There was no way I could agree to it. I wanted the order to disappear. I lost the ability to make rational decisions. I became paranoid of everyone around me. I had to give my wife Kim power of attorney to handle the case. I had suicidal thoughts. I scheduled an emergency therapy session. I wanted to go to the hospital because I was having physical symptoms including vomiting and headaches. I was terrified of going to the U.C. hospital, my usual source of primary care. Would they call a doctor or a lawyer first? Would they let me in? Would they let me out again?

Thank God I have an incredibly supportive network of friends and a great therapist in San Francisco. They kept me out of the hospital. I needed extra medication and I did not have enough cash to buy antidepressants at the time of the crisis. My friend Russell loaned me $50 for medication. I thanked God for my friends and my safe environment.

I felt better the next morning. Kim handled the legal case. I don't want to think about it. Socrates drank the hemlock. Turing ate the poison apple. But they cannot keep a good Wallace down.

I am still effectively banned from the Berkeley campus, because the only building I really ever need to visit there is the Computer Science department. Almost any point in the whole department is within 100 yards of Ken Goldberg's office. I am ordered to stay 100 yards away from Mr. Goldberg.

I have a friend Victoria who has a restraining order. Victoria is a very pretty, smart, intelligent and apparently harmless young woman. She was sitting in our club one day, minding her own business for about an hour. Then, the guy with the order against her came in. He got the manager to call the police and have her thrown out, even though she arrived first. Ken Goldberg could similarly have me thrown out of any cafe in San Francisco that he wants to occupy. As long as Ken Goldberg has that power over me, I will fight this restraining order. He can harass me with the police! Remember, they started this. UCB took me to court, not the other way around.

After the ruling, I tried to make a complaint to U.C. Berkeley about Goldberg's ability to abuse the resources of the university to fight his personal battles. If he wanted the civil harassment restraining order the judge gave him, he should have been required to hire his own attorney just as I had to in order to defend myself. After all, I don't have the resources of a large legal department and police force to call upon to fight my battles with defenseless people.

What I have learned is that, with the exception of the U.C.P.D., there is essentially no procedure available for a citizen of California, not affiliated with U.C., to file a formal complaint about the misconduct of a U.C. Berkeley professor or employee. There is no mechanism for public oversight or review of these people.

Many of the professors involved in my case hold U.S. National Security clearances at the secret level or higher. As tenured professors, they view themselves as above the law. They believe their tenure protects them from oversight, investigation, or any questioning of their professional conduct. My own disability prevents me from obtaining a clearance, but I never considered myself to be a threat to national security. I am just an odd personality type. It always strikes me as odd, however, that many people much more dangerous than I, from Timothy McVeigh to Robert Hanssen to the professors in these Computer Science departments, have been passed through this security net.

Now, finally, in conclusion, I exit Berkeley's prison and return briefly to Searle's Chinese Room. The Chinese Room provides a good metaphor for thinking about A.L.I.C.E. Indeed the AIML contents of the A.L.I.C.E. brain are a kind of "Chinese Room Operator's Manual." Though A.L.I.C.E. speaks, at present, only English, German and French, there is no reason in principle she could not learn Chinese. But A.L.I.C.E. implements the basic principle behind the Chinese Room, creating believable responses without "really understanding" the natural language.
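The "operator's manual" idea is easy to demonstrate in miniature. The sketch below is an illustrative toy in Python, not the actual A.L.I.C.E. engine, and the categories in it are made up rather than drawn from the real AIML knowledge base; it only shows how a handful of pattern/template pairs can produce plausible replies with no understanding at all:

```python
# Toy AIML-style matcher: each "category" maps a pattern to a template.
# "*" is a wildcard; "<star/>" in a template echoes whatever the wildcard
# matched, much as AIML templates do. Categories here are invented examples.
import re

CATEGORIES = [
    ("ARE YOU *", "Yes I am <star/>."),
    ("WHAT ARE YOU", "I am an artificial intelligence."),
    ("*", "Tell me more."),  # catch-all fallback category
]

def respond(text):
    # Normalize the input the way AIML does: uppercase, punctuation stripped.
    words = text.upper().strip(" ?.!")
    for pattern, template in CATEGORIES:
        # Translate the AIML-style pattern into a regular expression.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, words)
        if m:
            star = m.group(1).lower() if m.groups() else ""
            return template.replace("<star/>", star)
    return "I have no answer for that."

print(respond("Are you self-aware?"))  # -> Yes I am self-aware.
print(respond("What are you?"))        # -> I am an artificial intelligence.
```

The operator (or program) following these rules produces a believable answer to "Are you self-aware?" by pure string substitution, which is exactly the point of the Chinese Room: nothing in the loop understands the question.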

Natural human language is like a cloud blowing in the wind. Parts dissolve away and new pieces emerge. The shape of the cloud is constantly changing. Defining "English" is like saying, "the set of water molecules in that cloud" (pointing to a specific cloud). By the time you are done pointing, the set has changed. "That cloud" is actually a huge number of possible states.

This brings to mind the analogy of Schrodinger's Cat. According to Schrodinger, the cat is neither alive nor dead until the box is opened. The scenario is not unlike the Chinese Room, with its imprisoned operator, or the Turing Imitation Game, where the interrogator may not peek behind the curtain. The analogy suggests that language and consciousness may have the unobservable characteristic of subatomic physics. There is no "there" there, so to speak.

The practical consequence of all this is that botmasters may never be unemployed. Current events and news will always be changing, new names will appear, public attention will shift, and language will adopt new words and phrases while discarding old ones. Or perhaps, bots will become so influential that everyone will "dumb down" to their level, and cool the cloud of language into a frozen icicle of Newspeak that Orwell warned us about, once and for all.


165 comments

FP FP FP (-1, Offtopic)

james3v (594478) | more than 11 years ago | (#3959113)

i would like to dedicate this fp to AC and RG...

Weird. (1)

Photar (5491) | more than 11 years ago | (#3959119)

They should have put these two stories together as one. Either that or spread them out farther.

i love this man. (3, Insightful)

Alric (58756) | more than 11 years ago | (#3959124)

who knew an AI specialist could be such a skilled writer. amazing interview.

Re:i love this man. (4, Funny)

tmarzolf (107617) | more than 11 years ago | (#3959578)

who knew an AI specialist could be such a skilled writer. amazing interview.

AI Specialist

OR

Computer?

Re:i love this man. (2, Interesting)

Anonymous Coward | more than 11 years ago | (#3959883)

Natural human language is like a cloud blowing in the wind. Parts dissolve away and new pieces emerge. The shape of the cloud is constantly changing. Defining "English" is like saying, "the set of water molecules in that cloud" (pointing to a specific cloud). By the time you are done pointing, the set has changed. "That cloud" is actually a huge number of possible states.

This brings to mind the analogy of Schrodinger's Cat. According to Schrodinger, the cat is neither alive nor dead until the box is opened. The scenario is not unlike the Chinese Room, with its imprisoned operator, or the Turing Imitation Game, where the interrogator may not peek behind the curtain. The analogy suggests that language and consciousness may have the unobservable characteristic of subatomic physics. There is no "there" there, so to speak.


Goddamn, what a thoughtful set of paragraphs. This is the first slashdot article I've decided to print. I don't care about the length. The guy had plenty interesting to say, and /. did the right thing in breaking up the interview into logical pieces.

Before I can tell you more about why I liked this article, I need to teach you a bit about Debbie Hollinger. Debbie was my competitor in 1st grade for the coveted "Milk Boy" position. Never mind that Debbie was a girl and the position was for a "Milk Boy", the title carried status and prestige on the playground. The Milk Boy would check off the distribution of government mandated lunches, and make sure everyone got their milk--a responsibility that everyone immediately recognized as Jobian in its complexity.

Debbie conspired with Mr. Johnson, the first grade teacher, to take the title of Milk Boy away from me. They unfairly characterized me as "power hungry" and "obsessive compulsive" with lingering separation issues. So, for the week of February 15, Debbie was appointed the "Milk Boy" for the class.

Needless to say, this naked power grab resulted in disaster. Instead of making sure each person got just one milk, Debbie spent most of her time doting on Mr. Johnson, who was busy with cookie and straw distribution. (Trusting cookie distribution to 1st graders was unheard of, because of the lack of trust society places in those so young.) Because of the mismanagement, Lars (the class bully) ended up getting TWO milks instead of one. My protests fell on deaf ears, since by the time Debbie and Mr. Johnson could be distracted from the cookie kingdom, Lars had secreted the second milk in his backpack.

Not one to take this sort of abuse lightly, I brought a lawsuit in the District Court. How the school district chose to exercise its wealth and power is a sad, ugly tale, and a story for another time.

Which brings me back to my original point. This article is full of tremendous insights, punctuated by long, winding discussions. But I loved it nonetheless.

Re:i love this man. (0)

Anonymous Coward | more than 11 years ago | (#3960688)

Shit, I used up my mod points yesterday. Somebody give this man +3 Insightfully Funny.

Re:i love this man. (0)

Anonymous Coward | more than 11 years ago | (#3961609)

Nah. The one-liner:

who knew an AI specialist could be such a skilled writer. amazing interview

is much much more insightful, and deserving of the +4 ranking.

Reasons not to use the -v verbose option. (5, Funny)

Picass0 (147474) | more than 11 years ago | (#3959190)

echo off, man!

Three stories? You make JonKatz look terse!!

Slashdot Interview Truth drug? (3)

warmcat (3545) | more than 11 years ago | (#3959250)

What is it with these interviews lately? People have interesting things to say and then they reveal they correspond with child molesters or are in trouble because they have a backup personality (I imagine this is less fun than it sounds).

Is Taco feeding them a Truth drug in some dungeon at the Corporate Headquarters?

Re:Slashdot Interview Truth drug? (1)

dwerg (58450) | more than 11 years ago | (#3961984)

I feel used...

Jesus, what a chatty bitch... (3, Insightful)

Bonker (243350) | more than 11 years ago | (#3959264)

The insights on AI, particularly, the digression into the functions of AIML for A.L.I.C.E were wonderful in this interview.

HOWEVER, the interview's subject frequently digressed and in a couple cases didn't answer the questions posed, particularly the question

'Do you think that real artificial intelligence will come from this process, starting with a running dummy and stub methods, or from careful design and planning, so that in the end we can flip the switch and have a working prototype? Is A.L.I.C.E. a reflection of your beliefs or just an experiment?'

After reading through the length and breadth of that reply, looking for an answer, I began to skim through the rest of his answers.

Re:Jesus, what a chatty bitch... (3)

DeadSea (69598) | more than 11 years ago | (#3959383)

HOWEVER, the interview's subject frequently digressed and in a couple cases didn't answer the questions posed, particularly the question.
I think that is a sure sign that the questions were answered by a robot. Everytime I try to talk to one of these robots, it tries to change the subject when it doesn't know the answer.

read the interview, retard (0)

Anonymous Coward | more than 11 years ago | (#3961206)

if you'd actually read the entire interview you'd be able to infer the answer directly without further queries. dumbass.

Is this an Onion article? (5, Funny)

gosand (234100) | more than 11 years ago | (#3959322)

Damn, is it just me, or is this interview a lot like The Onion Advice [theonion.com] articles?

Re:Is this an Onion article? (0)

Anonymous Coward | more than 11 years ago | (#3961852)

It's not just you.

Interesting, one point of disagreement (5, Informative)

Illserve (56215) | more than 11 years ago | (#3959344)

I have much more respect for Wallace after reading this reply. He's a deeply insightful individual and doesn't appear to be taken in by much of the bullshit of the AI field.

One point I disagree with him about is this:

> I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball. From an evolutionary standpoint, our supposedly wonderful cognitive skills are a very recent innovation. It should not be surprising if they are only poorly implemented in us, like the lung of the first mudfish. We can breathe the air of thought and imagination, but not that well yet.

While it's true that our brains are not well adapted to the problems of the 20th century (remembering lists of facts, for example, would be a great thing to be able to do innately), I think Wallace doesn't possess a very deep understanding of neurophysiology when he compares neural function to transistors and silicon.

The idea that neurons simply summate incoming information, apply a threshold, and then output is very outdated. A single neuron is more like an entire computer than a transistor. There is evidence that a single neuron possesses an extraordinarily rich array of computational mechanisms, such as complex comparisons within the dendrite. In fact, the dendrite might be where the majority of computation is performed within the brain.

A neuron is constantly adapting to its inputs and outputs, and this includes such things as growing new spines (inputs) and axons (outputs). And within the cell, we are just beginning to see the enormous range of chemical modulations that change its functional characteristics in a dynamic fashion. A neuron can even change its own RNA to effect long term changes in its synaptic gain that are perhaps specific to a given synapse (1 of 10,000).

The messy wetness of neural tissue, for which we pay the price of very slow signal transmission, is precisely what gives it the ability to change itself in such a dynamic manner. Neurons are slow, but they make up for it with massive parallel dynamics of outrageous complexity. The neuron is *not* a clumsy kludge implementation. It is a finely tuned and well oiled machine, the result of millions of years of tinkering by evolution.

While it's probable that we can concoct innovations that might improve on the basic neuron (for example, long axonal segments could probably be replaced with electrical wires for a speed gain without much loss of computational power), the neuron itself should not be so quickly discarded.

-Brad

Re:Interesting, one point of disagreement (2)

blamanj (253811) | more than 11 years ago | (#3959401)

He also says that the brain is horrible at math, which isn't true. While it's true of most of us, people with savant syndrome (good article in June Scientific American [sciam.com], hardcopy only) are often capable of amazing mathematical abilities, so it's not the brain (or "meat") that's the problem. It's the software.

Re:Interesting, one point of disagreement (4, Insightful)

orthogonal (588627) | more than 11 years ago | (#3959717)

The brain's great with math:

it does complex waveform analysis so that you can understand speech,

massively parallel image transformations to make two two-dimensional bitmaps (the photons that fall on your two retinas) into a single three-dimensional reconstruction (what you perceive),

and ballistic calculations involving dynamically changing multi-jointed launchers when you move the many muscles in arm and shoulder to throw a rock

What do these mathematical calculations have in common that isn't shared with calculating 10333 + 89677? The mathematical calculations the brain does effortlessly and with any awareness on your part were "designed" by at least six million years of evolution. Failures at this math didn't get to be ancestors, which is why you're so good at it.

Conscious math, on the other hand, has been a factor for at most 30,000 years or so, math with numbers larger than a handful probably for at most 8000 years -- and even today, not for anything more than a fraction of the population. So that ability isn't built in or improved by evolution.

So it's not that the brain can't do math; it does do math. It just has never needed to do it consciously, and so it doesn't.

Instead the human brain runs a general problem solving "interpreted" program that can be laboriously trained to do math or many other forms of abstract thinking. The price for this flexibility is slowness and inaccuracy. But we don't say that our computers "don't do math" or complain they're made out of the wrong materials when an interpreted BASIC program calculates matrices VERY slowly, or when it introduces errors in floating-point calculations.

Re:Interesting, one point of disagreement (1)

orthogonal (588627) | more than 11 years ago | (#3960644)

s/effortlessly and with any awareness/effortlessly and withOUT any awareness

(And before anybody objects that the brain does even more math, no, I did not mean to imply this was anything like an exhaustive list. :) )

Re:Interesting, one point of disagreement (1)

operagost (62405) | more than 11 years ago | (#3960070)

I agree. I'd like to think the reason I have mediocre math skills is because all my "CPU" is occupied with running "Verbal Skills 2002" and "Music Aptitude XP" .

Re:Interesting, one point of disagreement (2)

Elwood P Dowd (16933) | more than 11 years ago | (#3959765)

It seems that his depression has led him to be quite a misanthrope:

Slashdot: Is alice intelligent?
Wally: People are dumb.

Well, fair 'nuff, Wallace. If we hogtied the man behind the curtain, could Alice have written your responses to these questions? Maybe that's a bad example, because many of the answers were very distantly related to the questions asked...

Anyway, aside from general misanthropy, Wallace seems like a wonderful person. Glad to have read every word.

Re:Interesting, one point of disagreement (2)

ocbwilg (259828) | more than 11 years ago | (#3960412)

> I always say, if I wanted to build a computer from scratch, the very last material I would choose to work with is meat. I'll take transistors over meat any day. Human intelligence may even be a poor kludge of the intelligence algorithm on an organ that is basically a glorified animal eyeball. From an evolutionary standpoint, our supposedly wonderful cognitive skills are a very recent innovation. It should not be surprising if they are only poorly implemented in us, like the lung of the first mudfish. We can breathe the air of thought and imagination, but not that well yet.

Interesting. That was the part of the article that I found most striking too, but for different reasons. The reason that our wetwired brains are such "shitty" computers when compared to silicon computers is because they weren't designed. They just sort of grew in a non-directed organic fashion, and advantageous adaptations were propagated. While that tends to create a machine that's fairly adept at what it does in its natural environment, it probably won't make nearly as good a "computer" as something that was designed from the ground up to be a computer (or to do a specific job).

On the other hand, it may be that trying to apply the computational model to the human mind is just a poor match (though it certainly seems to be the best model that we have at the moment).

Hogwash (2)

Illserve (56215) | more than 11 years ago | (#3961493)

They did not grow in a "non-directed" fashion at all. Evolution has very clear directions (for each of its millions of species), it's just spread out over millions of years, so it doesn't seem directed when compared to computer design on the timescale of 1-2 years.

Re:Hogwash (2)

ocbwilg (259828) | more than 11 years ago | (#3961902)

They did not grow in a "non-directed" fashion at all. Evolution has very clear directions (for each of its millions of species), it's just spread out over millions of years, so it doesn't seem directed when compared to computer design on the timescale of 1-2 years.

You're mistaken. Directed evolution would be saying that evolution is the work of conscious decision and planning. It's like saying that "evolution" planned for microscopic bacteria to turn into advanced multi-cellular organisms that would eventually evolve into homo sapiens sapiens. I can't think of anything that could be more opposite of what natural selection is all about.

now i get it (1)

fenux (193823) | more than 11 years ago | (#3959360)

this is alicebot who's answering the questions?

Jesus! (0)

Anonymous Coward | more than 11 years ago | (#3959362)

What the hell kind of bus was that guy on when he wrote that? It's a freaking novel.

BUZZ (1)

fatwreckfan (322865) | more than 11 years ago | (#3959441)

Not bus. I'm a moron.

Thanks Alice, can I talk to Dr. Wallace now? (2, Funny)

Vox Humana (580533) | more than 11 years ago | (#3959373)

Did anyone else get the impression that these answers only corresponded to the questions in a shallow, 'keyword' kind of way?

Re:Thanks Alice, can I talk to Dr. Wallace now? (3)

monkeydo (173558) | more than 11 years ago | (#3959468)

I didn't think the answers corresponded to the questions in any kind of way. I also don't think this man has a very strong grasp on "reality".

Re:Thanks Alice, can I talk to Dr. Wallace now? (1)

CaptainEcchi (560949) | more than 11 years ago | (#3959740)

I guess those LSD experiments they did on him in college that he talks about with such fondness really *did* have an effect on him.

Re:Thanks Alice, can I talk to Dr. Wallace now? (0)

Anonymous Coward | more than 11 years ago | (#3960556)

I just have to say that I was starting to feel alone in thinking this man is deranged and sort of retarded. There are so many people that are commenting on how thoughtful and impressive his reply was, when quite honestly he didn't answer a single question without being a nut, and most of the time spent more time whining than even considering the question.

The editors should have asked him to actually respond to the questions, instead of letting him use Slashdot as a forum to spread his inane conspiracy theories, make stupid comments about the effectiveness of the brain, and whine about academia. Oh, the poor man on disability! Woe is him.

*barf*

Re:Thanks Alice, can I talk to Dr. Wallace now? (0)

mondoterrifico (317567) | more than 11 years ago | (#3961345)

Maybe u feel this way because it is so rare to actually read something written with some feeling. I mean this is a great read, the guy has some interesting thoughts. I mean, to say that the guy doesn't seem to have a grasp on reality just because he didn't respond with canned answers says more about your personality than his.

"Hello pot? This is kettle calling, you're black" (5, Insightful)

malakai (136531) | more than 11 years ago | (#3959386)

Wallace said:
Politicians, at least those in our society, never seem to give a straight answer to a question. If a journalist asks a specific question, the politician answers with a "sound bite" or short, memorized speech which is related to, but does not necessarily answer, the reporter's question
For a scientist, he answers his own questions remarkably like a Politician.

While reading his responses, I felt as if no matter what the question, his only intention was to plug his own problems or somehow get back to what he was talking about in the previous question.

In fact, other than 2 questions, I would say the others were not answered. And furthermore, you could concat all his answers and you'd have one flowing diatribe on academia. Which, I'm not saying is incorrect, but we're here to talk about AI. Leave your griefs and personal problems on the mat outside the door.

Re:"Hello pot? This is kettle calling, you're blac (1)

Sanga (125777) | more than 11 years ago | (#3959484)

> Which, I'm not saying is incorrect, but we're here to talk about AI. Leave your griefs and personal problems on the mat outside the door.

Ahhh ... if we can only get AI to discuss AI. NI brings its pitfalls right to the table :-)

Re:"Hello pot? This is kettle calling, you're blac (1)

synshyne (585540) | more than 11 years ago | (#3959572)

For a scientist, he answers his own questions remarkably like a Politician.

depends on how you define the word "HIS". I'm sure that the extensive answers that were given, that also did not directly pertain to the question, adding in the factor of political correctness, and mostly factual evidence of past attempts with ALICE and such...were all a result of him memorizing what he should say; basically his answers went in a circle to answer themselves, bypassing questions just to continue ranting! I do agree that he sounds more like a politician than a scientist of any sort. The only scientific statements I heard were references to facts from other scientists (engineers, etc.) and that brief rant about neurons in human brains (this was before he went on to complain about how dumb we are because our brains aren't "good computers")...
The only thing I can really complain about is the answers, and how they were given. It is true that you don't want to make a conversation short, and that you don't want to make yourself look like you don't know what you're talking about, so you try to keep the conversation flowing and give long answers...well, he tended to give long answers yes, but he missed the point and went off on his own personal statements instead of sticking to facts.
Interviews like this I will skim over and pick up a key word here and there, but I won't sit and read someone else's problems in their mind; I've my own to worry about! *rolls eyes*
Anyone can memorize a few scientists, a few dates, some significant things they did with their experiments, and why they pulled the plug on what they were testing....only a politician can do all of the above and still avoid the question!

Re:"Hello pot? This is kettle calling, you're blac (2, Interesting)

alzh (590428) | more than 11 years ago | (#3959603)

From alicebot.org (almost /.-ed):

Q: Do you agree with Dr. Wallace that politicians never seem to give a straight answer to a question?

A: I have heard that opinion, but I would like to know more before I form my own.

Soundbites (3, Insightful)

Rupert (28001) | more than 11 years ago | (#3959916)

I'm not sure he was condemning this behaviour. This is exactly what ALICE does. If anything, he is becoming like his creation, with all the Frankensteinian overtones you want.

Re:Soundbites (1)

Jhan (542783) | more than 11 years ago | (#3960214)

Politicians, at least those in our society, never seem to give a straight answer to a question. If a journalist asks a specific question, the politician answers with a "sound bite" or short, memorized speech which is related to, but does not necessarily answer, the reporter's question.

Yes, I see what you mean. The whole interview read as if he had a corpus, say the last book/report he wrote and then fuzzily matched questions to book chapters and just quoted the lot. Not that the "replies" weren't interesting, they just had very little to do with the questions.

Also, I see evidence of multiple mental disorders in his ramblings. Self-reported chronic depression, paranoia, megalomania... Maybe a wee bit too much LSD & cannabis under the bridge.

Re:"Hello pot? This is kettle calling, you're blac (1, Funny)

Anonymous Coward | more than 11 years ago | (#3961607)

At least his off topic ranting was interesting. Yours is a snore.

AC23

Re:"Hello pot? This is kettle calling, you're blac (2)

iabervon (1971) | more than 11 years ago | (#3961722)

Why should a politician give a straight answer to a question? Someone who is going to be endlessly quoted shouldn't be trying to think on their feet, because they're unlikely to pull it off, even if they actually have the necessary information to come up with a good answer. Politicians, ALICE, and Wallace in this interview have responses to certain things, and they don't have anything else suitable to answer with.

I don't think Wallace is actually complaining about politicians in this statement, nor would he say that a scientist would answer differently. A scientist, especially, only knows a limited set of things, and so can only answer certain questions. These questions will get a canned answer, while other questions will be diverted. A scientist absolutely cannot give a straight answer to most questions, because the answer requires doing some research. Questions answered at a scientific lecture are limited to clarifying what's been said, reporting other research, and describing plans that have already been formed.

What is different is that a scientist expects to be asked exclusively about the prepared material, whereas a politician expects to be asked about other topics and to divert those questions.

... and I was expecting news on AI (0)

Anonymous Coward | more than 11 years ago | (#3959402)

This man has a problem, I don't know if I should be compassionate with him, or complain that he hardly has given an answer - I read a lot of words, but I hardly could grasp the answer. Is it me only? I read the paragraphs and try to find the conclusion, but there is none - none I see.

What AI is . . . 50 years promises, no results (hardly any results) - it can't be AI is a pattern matching program which recursively simplifies strings?

The statements on consciousness . . . well, AI fails because it can't grasp the concept of true intelligence; consciousness doesn't start with knowing what is true or false, but with knowing that the very thinker exists. If you were not aware that you are, you would not care to live (maintain life) nor to ask, about yourself or anything else you perceive outside of yourself (e.g. counting apples - first step of mathematics).

The nature of consciousness is far more important than pattern matching; linguistics is far more important than pattern matching, and linguistics reveals a lot about OUR human thinking. I know people who are very versed in Greek, and they explain to me all the English or German words, and how they are composed and what they mean originally, and how we take it. Example: look up the word "enthusiasm", and you will be surprised what it really means.

I think, most serious AI researchers already abandoned the AI-hype and lean toward consciousness studies, and this of course doesn't give immediate results to build an "AI-secretary" which takes notes at a conference - no, we first require to understand OURSELVES far better.

Take care.

academia (0)

Anonymous Coward | more than 11 years ago | (#3959418)

I liked what he said about politics and academia: that was the only reason i got the fuck out at the master's level. i couldn't stomach the shit for another two years for a phd. other than that, most of this interview is utterly incomprehensible. the man is most obviously a total hysteric, a manic depressive gone totally bipolar. i think he needs to get in touch with a local dealer and ingest massive doses of acid, and soon!

btw, has anyone done any SERIOUS work on AI dogshit? there might be a nice paper in that!

Re:academia (0)

Anonymous Coward | more than 11 years ago | (#3959585)

(I'm a moderator currently but am posting as A/C)

You mean, BS = BullSh?t
MS = More of the Same
PhD = Pile it higher and Deeper

? Seriously though, graduate programs aren't for everybody. However, there are professors out there who are truly motivated by research for its own sake, rather than forcing Nature into their world view even when that takes falsifying data. Also, you can't really avoid agendas and politics, and how much you have to 'make nice' with the boys' club depends on how far down the ladder you are.

Re:academia (-1)

spork_testicle (449364) | more than 11 years ago | (#3960970)

Actually, the reason you quit before the phd is that your are not smarte enouff to get a phd. You, in point of fact, are a stoooopid retarded dumbass. Yours, Dr. Spork_Testicle

Chat is dumb, but chat is not intelligence (2)

Animats (122034) | more than 11 years ago | (#3959488)

That's a fascinating article.

What this guy has really discovered is that chatting doesn't take much smarts. That's a very useful result, but it's domain-specific. A friend of mine who does commercial chatterbots pointed out to me that it's possible to do a sales chatterbot: salespeople talk about what they want to talk about and dominate the conversation. But attempts to build a useful tech-support chatterbot are usually unsuccessful.

The A.L.I.C.E. approach breaks down when the system actually needs a model of the subject matter to accomplish a result. You can talk about travel with A.L.I.C.E., but it won't help you plan a trip.

The author's view of AI in academia has some validity. I went through Stanford in the mid-1980s, when the expert-systems people were talking as if strong AI were right around the corner. It wasn't. And people like Feigenbaum knew it, despite their public pronouncements. The whole expert-systems debacle was embarrassing. Some of those people are still at Stanford, and their output hasn't changed much. They're still stuck. It's kind of grim. If you're near Stanford, go up to the second floor of the Gates Building and look in on the Knowledge Systems Lab (the "AI Graveyard"), with its empty cubicles and out-of-date issues of Wired.

Odd fellow. (2, Interesting)

johnthorensen (539527) | more than 11 years ago | (#3959558)

Dr. Wallace seems to hop up on his soapbox during answers to several questions, using some frail link between his point and a plausible answer as an excuse. His approach to AI also seems a bit illogical at times. Building a machine whose only worth is to knock back believable answers really doesn't get one that far if there is no reasoning behind it. That is what I like about the CYC project: the machine will have some "understanding", if you will, about things. Put another way, A.L.I.C.E. is to the Ultimate AI as a phone sex operator is to the Ultimate Lover. "All talk and no action", if that makes any sense :)

That said, Dr. Wallace's answers should be taken in the correct psychological context. Here we have a man who admits to clinical depression. Reading this, I felt myself at times very frustrated with his responses and attitudes, but as with other people in my life, I have determined that I will take his soapbox with a grain of salt and sort through it all to find the valuable information.

The one question that I'd REALLY be interested in hearing an answer to is: "How do you believe mental illness affects your biases and attitudes toward AI?"

-JT

Fascinating and Awestruck (1)

Silverstrike (170889) | more than 11 years ago | (#3959569)

This is honestly one of the best interviews, or literary pieces, I have ever read. He is one of the most thought-provoking people I've read, and I'd honestly like to meet the man.

However, while Dr. Wallace is obviously brilliant and insightful about almost all aspects of the so-called "Human Condition", he really needs to get over himself. Ye gods. Why does he find it necessary to dote on his self-proclaimed "mental health condition"? I'm certainly not a psychologist (and personally put very little stock in the field), but I can say that when dealing with the human mind, what a person believes is true about themselves has a particularly nasty tendency to become true.

As a scientist and as a person in general, Dr. Wallace would lead a much happier personal and professional life if he stopped feeling sorry for himself. So you got shitcanned in the early '90s. Wow. We all have professional setbacks along our arduous life journey. Aside from the young, there are no victims in this life. Learn and live that, and you'll find yourself stronger and better for it.

Re:Fascinating and Awestruck (2)

Chris Burke (6130) | more than 11 years ago | (#3961856)

Why does he find it necessary to dote on his self-proclaimed "mental health condition"?

I didn't get that impression at all. It didn't seem he was "doting" on it; he was simply being frank about what is obviously a major factor in his life. Other than lamenting the lack of medical research on LSD/marijuana and depression, I didn't find him particularly self-pitying. The court-case material was, perhaps, but his mental illness played only a minor role in that. That's just my impression, though.

but I can say that when dealing with the human mind, what a person believes is true about themselves has a particularly nasty tendency to become true.

Assuming that this is what in fact happened is another particularly nasty tendency. And for this I'm glad for Wallace's frankness. It's a step toward what I think is a healthy social attitude toward mental illness. "I'm bipolar" shouldn't cause any more stigma than "I'm diabetic", nor be any more likely to be assumed to be hypochondria. "Just pick yourself up and be happy" is the advice that's never helped a clinically depressed person.

Who's saner? (4, Interesting)

Washizu (220337) | more than 11 years ago | (#3959574)

Who's saner? A.L.I.C.E. or Dr. Wallace?

Great interview. Probably the best I've ever read on Slashdot (and I'll definitely come back eventually to read everything I glazed over). Does anyone else think it's strange that the leading AI researcher in the world is a self-described "mental patient"? I think it's pretty cool.

I especially liked this line:

It may be that future generations come to view what we call "consciousness" the same way we see the Earth at the center of Ptolemy's solar system, as an anthropocentric illusion useful for explaining the unexplainable. Perhaps after a new Copernicus pulls the wool from our eyes, the central role of "consciousness" in intelligence will move to the periphery of our knowledge system, if not disappear entirely.

Rock on. Even if we do have souls, they probably don't affect us at all.

Re:Who's saner? (0)

Anonymous Coward | more than 11 years ago | (#3960640)

He isn't the leading AI researcher. His work doesn't even have a great deal to do with the field of AI, either.

Re:Who's saner? (3, Insightful)

greenrd (47933) | more than 11 years ago | (#3962109)

the leading AI researcher in the world

Says who?

I would say he's not researching AI, he's researching human-like chatbots, which are not actually intelligent. Unless Artificial Intelligence now actually means Artificial Stupidity...

Even if we do have souls, they probably don't affect us at all.

That's logically possible, I'll admit - but I missed the bit where he offered a meaningful reason to believe this is the case.

The man clearly has problems (1)

ornil (33732) | more than 11 years ago | (#3959607)

Could we please interview someone who:
a) is more mentally stable
b) is not so much on the fringe
c) wants to talk about something other than his personal problems and the details of his bot when asked about general AI issues
d) answers at least one question he was asked
[oh, I am sorry, he may have answered one or two out of ten]

It would also be nice to interview someone who is a respected expert in the field, not an outcast, whom nobody takes seriously. Just to balance this interview, so to speak.

Re:The man clearly has problems (1)

CaptainEcchi (560949) | more than 11 years ago | (#3959842)

Excellent idea. An interview with Rodney Brooks?

flamebait (0)

Anonymous Coward | more than 11 years ago | (#3960713)

seriously, can it.

this man is brilliant and i only wish he had written more

answers (1)

kel-tor (146691) | more than 11 years ago | (#3959613)

"Tell all the truth but tell it slant - / Success in circuit lies" -- Emily Dickinson

Thanks for a great interview. Now write a book. (5, Interesting)

Dan Crash (22904) | more than 11 years ago | (#3959628)

Seriously. You're obviously a skilled communicator, and your opinions and insights are really unrepresented in the literature out there. Of course, I don't agree with all of them, but I enjoyed the hell out of reading your thoughts, and learned something, too. (Why *not* a Department of Psychedelic Studies that isn't just someone's dorm room?)

Please get a book contract. Tell them Slashdot said you need one.

Re:Thanks for a great interview. Now write a book. (0)

Anonymous Coward | more than 11 years ago | (#3960509)

You're stupid. A skilled communicator would not make you want to repeat the question and beg for a straight answer. I doubt there are very many people who want to read an entire book of his pointless rants.

Re:Thanks for a great interview. Now write a book. (1)

ar1550 (544991) | more than 11 years ago | (#3960741)

It would be fairly easy for him to write this book you speak of; after all, he has already completed 90% of the writing and posted it under the guise of a "Slashdot interview."

(Yes, I know there are already too many comments about the length of the reply)

The Chinese Room (2)

disappear (21915) | more than 11 years ago | (#3959643)

Since Dr. Wallace never offers a meaningful refutation of the Chinese room, let me offer one:

Sure, *you* don't understand Chinese. But the *whole system* understands Chinese. That includes the instruction cards, too. But it's difficult to query the whole system. Situating 'you' as part of the system makes it easy to forget that the human is just one component. (i.e., it's a red herring, designed to distract and confuse rather than enlighten.)

I'm not entirely sure I believe it, as there's got to be a good refutation of this explanation somewhere. But I haven't seen it yet.

Re:The Chinese Room (1)

crosbie (446285) | more than 11 years ago | (#3959925)

Of course you're right. But, as Wally said, it's a matter of politics.

It's not like Schroedinger's cat. If we open up our brains and discover that instead of a mystical soul there's just a bunch of slimy cells, it doesn't suddenly make us unintelligent. Neither does it matter if we discover that some of our neurons are actually being simulated by hyper-intelligent beings from another dimension as a form of penance.

Just as it doesn't matter if we're living in the Matrix or not. If you can't tell the difference, there is no difference. And we're right back to Turing. Looking behind the curtain does not disprove intelligence, it simply reveals the mechanism.

And frankly, the interview did indeed appear to be a joke on Wally's part, where he just fed all his theses and other literary output into ALICE and let it answer the questions.

How much of a coup to be able to say that even slashdotters can't tell the difference between Wally and a computer simulation of him... ;-)

Re:The Chinese Room (0)

Anonymous Coward | more than 11 years ago | (#3960672)

At first I thought he was a troll - as if some random Slashbot troll had answered the questions and submitted them to the staff as a real reply. Only as I read more and more of it did I realize that it was not a troll but a very delusional and mentally ill person, whose only claim to fame really has very little to do with AI.

Re:The Chinese Room (2)

Otter (3800) | more than 11 years ago | (#3960041)

Heh. Asked about Searle's Chinese Room, he responds:

People ask me, "Why are you obsessed with Ken Goldberg and U.C. Berkeley?" I say, I'm not obsessed. Other people are obsessed.

and follows up with an interminable rant about his legal hassles with this Goldberg and UC Berkeley, apparently set off by the fact that Searle is on the faculty there. He rants and rants (no, it's other people who are obsessed) and finally remembers the original question and waves it away.

Maybe if you have a good understanding of AI and theories of consciousness, the replies I found incomprehensible are meaningful. (Some people seem impressed by them.) But to an outsider it's gibberish, and kind of frightening gibberish at that.

Re:The Chinese Room (0)

Anonymous Coward | more than 11 years ago | (#3960698)

No, his replies aren't overly insightful from the perspective of someone with an understanding of AI.

I think it's fairly safe for you to assume that he is a crackpot, and that the people who found his response insightful merely glossed over it and were impressed by its length.

Re:The Chinese Room (0)

Anonymous Coward | more than 11 years ago | (#3960255)

He didn't refute it. What I gathered was that he knows he is faking it, but either 1) that isn't the point, since they are just trying to solve the simple problem of chatting, or 2) they are trying to fake it long enough to gain insight into implementing a real thinking robot.

Re:The Chinese Room (2)

Daetrin (576516) | more than 11 years ago | (#3961119)

I agree completely. It's as if you could somehow isolate the language center of the human brain, ask it questions while it was cut off from everything else, and then declare that humans are unintelligent because the part that translates thought into words and back again can't think on its own.

Not obsessed??? (1)

Smedrick (466973) | more than 11 years ago | (#3959689)

People ask me, "Why are you obsessed with Ken Goldberg and U.C. Berkeley?" I say, I'm not obsessed.

Riiiight. That's why it took him over 100 lines to START answering the last question.

This guy has a lot of interesting things to say, but I find it hard to agree with any of his beliefs. I think he pays too much attention to the politics and not enough attention to the "science".

IMO, the CYC project is what the AI field should be concentrating on. It might take a little longer to get good results, but it will give us a lot better understanding of ourselves.

Re:Not obsessed??? (1)

m0rph3us0 (549631) | more than 11 years ago | (#3960122)

This guy has obviously developed a system that works for the most part.

It seems to make sense from a logical standpoint. How do we acquire knowledge? By querying those who know, whether from books or from people; we then generally regurgitate the knowledge we found there. Only people who make discoveries are really doing anything more than this.

This system won't replace a scientist's ability to learn, but it could replace the average joe who, when answering a question outside his field, is most likely just regurgitating knowledge he got from someone else. I find ALICE to be a good commentary on the intelligence of society in general.

Awful proud of his disorder (0)

Anonymous Coward | more than 11 years ago | (#3959711)

Ok, so ~13% of America is bipolar, so the shrinks say. Wallace is one of these people, fine, whatever, so am I. ...but I don't constantly bring it up.

Unfortunately, it seems he's fallen into the victim-mindset that western psychology seems to help perpetuate. That is, that he's a victim of (bipolar disorder || the court system || politics || corruption in academia || whatever).

When are people gonna start taking some responsibility for themselves and their actions? Honestly. Stop whining and get ON with your life.

Great article though. Very interesting.

Slashback Anyone? (1, Flamebait)

echucker (570962) | more than 11 years ago | (#3959859)

Hell, I screw off at work just as much as the next guy on slashdot, but I don't have the week it takes to read this and attempt to digest the doublespeak doublespoken here.

For more interviews like this one... (1)

muleboy (123760) | more than 11 years ago | (#3959970)

I recommend the Onion's "Ask ..." [theonion.com] columns.

ken goldberg... (3, Insightful)

kevin lyda (4803) | more than 11 years ago | (#3960081)

it only seems fair that after all that someone should interview ken goldberg [berkeley.edu] , yes?

Re:ken goldberg... (2, Insightful)

dekraved (60562) | more than 11 years ago | (#3962096)

Well, since he didn't talk to Wired [wired.com] , I wouldn't get my hopes up. Which is too bad, since this Goldberg fellow is pretty mysterious and seems to be a bit paranoid himself.

I think the bigger question is: do crazy people go into AI, or does AI make people crazy? There were definitely a few professors I avoided during my undergrad because they reputedly had gone nuts working on AI problems.

Re:ken goldberg... (1)

kiscica (89316) | more than 11 years ago | (#3962121)

Ken Goldberg isn't an AI researcher. And I can testify that he is one of the sanest people I know, certainly not "a bit paranoid himself."

I know almost nothing about this whole sad business with Wallace -- this interview was the first I heard of it -- but I would think that, if Slashdot wanted to do an interview with Dr. Goldberg, we'd be best off proposing to ask him about his own research, which is fascinating in its own right. (I said as much in another comment, further down.)

Kiscica

Re:ken goldberg... (1)

kiscica (89316) | more than 11 years ago | (#3962097)

Ken is an absolutely fascinating person who is overflowing with interesting ideas and projects (a couple of which I've worked with him on). I'm sure he'd be an excellent subject for a Slashdot interview, on his own merits. I don't, however, think that bringing up the sad case of Dr. Wallace would be a politic way to pitch the interview.

Kiscica

wow.. finally (1, Interesting)

joeldg (518249) | more than 11 years ago | (#3960118)

Finally, a /. interview where the person actually *really* responds to the questions with more than a full paragraph per answer. I think this is the very first interview here that has done this, in my recollection of years of being on /. I did notice that he would digress at points and wander into more personal areas, but hey, I can understand. Right on. More interviews like this and I will never get any work done.

Wow (4, Funny)

Salamander (33735) | more than 11 years ago | (#3960175)

This guy gives "artificial intelligence" a whole new meaning, doesn't he?

important connection between memory and function (2)

graveyhead (210996) | more than 11 years ago | (#3960240)

I think these raw "knowledge base" solutions leave much to be desired. For a more convincing and useful AI, raw memory needs to be linked more closely with logic and function. Consider this rather arbitrary subject: my Bic cigarette lighter sitting in front of me on my desk.

  • Memory has turned to procedure when I pick it up and light a smoke, or when I lit off those bottle rockets on the 4th. That procedure is well-defined and repeated; it has been used a hundred thousand times before, and its utility will never cease. This procedure must be inherently attached to the raw concept of "cigarette lighter", and applied as appropriate.
  • Once, when I tapped a Bic lighter on a countertop to get a few more uses out of a nearly dead one, the plastic case cracked and lighter fluid sprayed all over me, accompanied by a loud bang! From that single experience, I learned never again to let a lighter jam up against a hard surface with any amount of force. Behavior is modified.

Thus, experience and repeated procedure have helped me "understand" the utility and storage details necessary to effectively use the item.

I believe that next-generation AI will take the form of a relational, queryable memory in which each subject is attached to one or more genetically built procedural programs. The programs evolve over time to suit the procedural needs of the subject.

The most critical component in this system is the trainer, which could be a human or some kind of automated retrospective asserting the usefulness of the genetic program. An automated system is more desirable because it would reduce the learning time and more closely approximate human lines of thought (e.g. I smacked that lighter and it exploded; maybe I shouldn't do that again?). But it seems to me that this would require the ability to represent the logic of interaction between arbitrary subjects in some pure mathematical form, similar to the way competitive behavior is described by Nash's Equilibrium and Game Theory. Describing a general translation of text content into logic is a daunting prospect, and it is the reason I haven't started writing code ;)

The genetic programs could breed until the margin of error in their processing is "acceptable", and would reevaluate and spawn new generations of programs only when the process continually leads to wrong conclusions.

A system like this is a continuous learning machine, and never reaches a state where "training mode is disabled". It constantly learns from experience and refines behavior based on results. As humans we incorporate new ideas into our daily procedures with relative ease (OK, well some of us do ;). This is one of our greatest strengths, and I think it has been sadly misrepresented in the current generation of AI research.

Don't get me wrong here: I think the works of Dr. Richard Wallace and John Koza, et al. are very important milestones. What I am saying is that there must be creative, uninvestigated ways in which raw memory can be combined with procedural action.
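The learning loop described above (repeated success reinforcing a procedure, a single bad outcome suppressing it) can be sketched in a few lines. This is purely illustrative, with a made-up utility store and update rule, not any published system:

```python
# Toy sketch of the "never stop training" idea: procedures attached
# to a subject are re-weighted whenever an action's outcome differs
# from expectation. Values and update rule are invented.

class ExperienceMemory:
    def __init__(self):
        # (subject, action) -> estimated utility of performing action
        self.utility = {}

    def choose(self, subject, actions):
        """Pick the action with the highest learned utility."""
        return max(actions, key=lambda a: self.utility.get((subject, a), 0.0))

    def observe(self, subject, action, reward):
        """Blend the new outcome into the stored estimate."""
        key = (subject, action)
        old = self.utility.get(key, 0.0)
        self.utility[key] = 0.5 * old + 0.5 * reward

mem = ExperienceMemory()
# repeated successful use builds up the "flick" procedure...
for _ in range(5):
    mem.observe("lighter", "flick", reward=1.0)
# ...while one exploding-lighter experience suppresses "tap on counter"
mem.observe("lighter", "tap_on_counter", reward=-10.0)

print(mem.choose("lighter", ["flick", "tap_on_counter"]))  # flick
```

There is no "training mode is disabled" state here: every `observe` call keeps refining behavior, which is the property the comment argues for.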

Re:important connection between memory and functio (2)

greg_barton (5551) | more than 11 years ago | (#3960563)

That "continual learning" concept is kind of what I'm after. I wish Wallace had given a more clear answer to my question, but that's OK. I'll hopefully be applying evolution to his system in a few years.

Berkeley EECS and mental illness (0)

Anonymous Coward | more than 11 years ago | (#3960288)

I have no direct or indirect knowledge of Wallace's dealings with Goldberg or anyone else at Berkeley. But it might be easier to understand UCB's response if people realize that in the Spring of '02, there was a lot of concern among the faculty because a certain otherwise bright undergrad apparently was losing his mind and had taken to emailing the faculty and others (the governor, president, etc.) vague threats that gradually became more and more specific and violent. In the end, the student was made to leave the university, but it took a lot of doing due to various privacy and other policies that protect students.

Perhaps this might help explain why the administration acted as it did with regard to Wallace at that time. It is not a conspiracy folks.

Re:Berkeley EECS and mental illness (1)

alicebotmaster (134416) | more than 11 years ago | (#3960417)

I've heard a little about that case and I'd like to find out more. Anyone with more specific information is welcome to contact me privately at drwallace@alicebot.org

Thanks,

Dr. Rich

Andy Kaufman (1)

shepmaster (319234) | more than 11 years ago | (#3960309)

To all those who are complaining or noticing that the answers to the questions stray or are off-topic, think of Andy Kaufman [jvlnet.com] .

Remember: This interview is from someone who creates an A.I. program that responds by a lookup of sorts in a massive database of possible questions. (If I understand correctly)
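That description is roughly right: an AIML bot matches the normalized input against a large list of stored patterns, more specific patterns before the "*" catch-all. A rough sketch of such a lookup (the patterns and replies here are invented, not ALICE's actual categories):

```python
# Rough illustration of wildcard pattern lookup, in the spirit of
# the database-of-possible-questions description. Patterns invented.
import re

CATEGORIES = [
    # more specific patterns are listed (and therefore matched) first
    ("DO YOU LIKE *", "Yes, {0} is one of my favorite things."),
    ("MY NAME IS *",  "Nice to meet you, {0}."),
    ("*",             "Interesting. Tell me more."),
]

def respond(text):
    norm = text.upper().strip(" ?!.")
    for pattern, template in CATEGORIES:
        # turn the AIML-style "*" wildcard into a regex capture group
        regex = "^" + re.escape(pattern).replace(r"\*", "(.+)") + "$"
        m = re.match(regex, norm)
        if m:
            return template.format(*[g.capitalize() for g in m.groups()])
    return ""

print(respond("My name is Alice!"))   # Nice to meet you, Alice.
print(respond("Do you like cheese?")) # Yes, Cheese is one of my favorite things.
```

The real ALICE brain holds tens of thousands of such categories and adds reduction rules on top, but the lookup-with-wildcards core is essentially this.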

Bleh...a bit much, I think. (2)

Jayde Stargunner (207280) | more than 11 years ago | (#3960419)

This guy is obviously smart, but this whole interview just seemed to be a lot of whining and rhetoric.

Honestly, he ignored some of the questions, treating them as opportunities to ramble on about one thing or another. It's just a bit much.

He bashes politics in the text, yet answers with highly political answers.

I respect his work, but I'm not so impressed with his views on intelligence...nor am I interested to hear about all his troubles with the AI community.

I just wish he would have shut up and answered the questions some of the time... this is an interview, after all--not a soapbox.

-Jayde

What intelligence really is? (1)

subspacemsg (593356) | more than 11 years ago | (#3960580)

The whole scheme looks like a way to store information and retrieve it, rather than a way to create intelligence: bits and pieces of the botmaster's intelligence tucked away in a big database and shown off when the right question is asked. Dr. Wallace may have pulled off a good data-retrieval system... or redefined intelligence. This raises the question of how original and unique a human's mental database really is. Are we just repeating things we collected here and there? Does that constitute intelligence? The only way to answer is to really figure out what intelligence is all about. Dr. Wallace may be right or wrong with equal probability.

Anyone else care about the self-aware issue? (2)

telbij (465356) | more than 11 years ago | (#3960780)

Neurons are the transistors of the brain. They are the low level switching components out of which higher-order functionality is built. But like the individual transistor, studying the individual neuron tells us little about these higher functions.

This sounds like someone who doesn't really understand how all the transistors work. By jumping straight to high-level processing (XML, for crying out loud), you are assuming that transistors, logic units, chips, buses, memory, kernels, and shells are functionally equivalent to neurons and all the chemical processes our brains use. That seems like a HUGE jump to make, considering the obvious differences between the way a computer functions and what we know of the brain.

I find myself agreeing with the Churchlands that the notion of consciousness belongs to "folk psychology" and that there may be no clear brain correlates for the ego, id, emotions as they are commonly classified, and so on. But to me that does not rule out the possibility of reducing the mind to a mathematical description, which is more or less independent of the underlying brain architecture. That baby doesn't go out with the bathwater. A.I. is possible precisely because there is nothing special about the brain as a computer. In fact the brain is a shitty computer. The brain has to sleep, needs food, thinks about sex all the time. Useless!

Hmm, the brain evolved to keep us alive; computers can't keep themselves alive and reproduce, let alone adapt to changes. Perfect mathematical functioning is not a practical model for a lifeform in a world that is not static. The fact that Wallace views computers as better than a brain is very telling to me. To get philosophical: what makes any one thing better than another? I cannot continue reasoning down a path that doesn't value human intelligence, since I myself am human. To deny the value of my humanity would render me useless, so why go down that path?

And remember, no one has proved that our intelligence is a successful adaptation over the long term. It remains to be seen if the human brain is powerful enough to solve the problems it has created.

This is an unbelievable statement. The proof of our success is our population and physical dominance of the planet, what more do you need? If we destroy ourselves, then we weren't the ideal lifeform for this planet, but then again, ideal just depends on your concept of success. Nothing is forever anyway.

If consciousness is an illusion, is self-knowledge possible at all? For if we accept that consciousness is an illusion, we would never know it, because the illusion would always deceive us. Yet if we know our own consciousness is an illusion, then we would have some self-knowledge. The paradox appears to undermine the concept of an illusory consciousness, but just as Copernicus removed the giant Earth to a small planet in a much larger universe, so we may one day remove consciousness to the periphery of our theory of intelligence. There may exist a spark of creativity, or "soul," or "genius," but it is not that critical for being human.

The paradox is easily resolved. Consciousness is not an illusion any more than reality is an illusion. Our consciousness is the definition of our reality, and is, in fact, the only self-evident truth in the universe.

Perhaps the real question posed here is: do we have free will, or are we simply living out physical inevitabilities set in motion at the beginning of the Universe? This is a sticky issue, but I choose to believe that we DO have free will, for the simple reason that without free will, nothing has meaning. Where does the assertion that free will is an illusion get us? To a position where there is no point in making any effort or doing anything, because whatever happens is inevitable.

I agree with the last part of this paragraph insofar as it addresses human laziness and the tendency to go on 'auto-pilot' without thinking about what we are doing. Yet we all, to some extent, can be communicated with on a philosophical level. The assertion that the human 'soul' is an insignificant portion of our humanity is just thrown out there without the slightest bit of evidence.

I wish Wallace luck in constructing a machine that can replicate human behaviour, but I think he is making a lot of assumptions about consciousness that are better left open for exploration.

Re:Anyone else care about the self-aware issue? (1)

Ziviyr (95582) | more than 11 years ago | (#3962011)

Perfect mathematical functioning is not a practical model for a lifeform in a world that is not static. The fact that Wallace views computers as better than a brain is very telling to me.

I think it's a reflection of his own situation.

To get philosophical, what makes any one thing better than another?

Obviously, "better" can be taken in a mathematical sense or in a human, appreciative sense. Mathematics not being in question here, that which is better is whatever measures up better to the human making the judgement. Not that such judgements are limited to humans.

I can not continue reasoning down a path that doesn't value human intelligence since I myself am human. To deny the value of my humanity would render me useless and so why go down that path?

Because you might find that, even without intelligence, you may still have use. Not that I'm going to force you away from what I might consider an intelligence fetish.

humble guy (0)

Anonymous Coward | more than 11 years ago | (#3960924)

He says he's a humble disabled mental health patient but I say he's got a lot to be thankful for. I would give a lot to be in his position.

He's legally smoking weed in the wonderful social climate of Northern California. He's got a public forum and many people are reading his every word and are on his side. Finally he's got a PhD, which allows him (whether they're all out to get him or not) to put Ivory Tower ooomph behind all his thoughts and statements.

I'm a diagnosed schizophrenic. Because I want to go to school, I can't get disability. (I don't have a PhD (yet).) Furthermore, because of where I live, I can't smoke weed freely like he can. Finally, not a soul gives a screw about what I think or say. If anyone even reads this, I'll be surprised.

I say, simply, be happy Wallace, I would love to be in your shoes.

Re:humble guy (1)

alicebotmaster (134416) | more than 11 years ago | (#3961466)

I'm callin' my band "No shoes/No feet", man. Every time I feel bad about my life, all I have to do is go down to the pot club to find someone who has it 1000 times worse.

Dr. Rich

boilerplate answers (2)

bob_jenkins (144606) | more than 11 years ago | (#3960926)

I have no doubt that someone wrote these answers, or pieces of these answers, at some time in the past. But I've read several of those pieces verbatim before.

So, did he answer them with lots of cut-n-paste from a dictionary of his standard responses, or did he leave that to one of his robots? I don't know. Either way, the answers wandered a lot.

The question of human intelligence (1)

Slime-dogg (120473) | more than 11 years ago | (#3960959)

The doctor states that Alice is intelligent, to the point of simulating a human response. He also goes so far as to state that the human is 99.99999% robot, some of us being even more so.

But then you notice that Alice fails in cases of creativity. For instance:

C: Please write me a sonnet on the subject of the Forth Bridge.
R: Count me out on this one. I never could write poetry.

A good experiment to test true human-like intelligence would be to pass the machine a string of random words and ask it to construct a meaningful paragraph from them. Humans can do this without a problem; it's as simple as the word magnets attached to the refrigerator.

It seems that Alice's responses are all just a reflection of a human's decision of what is appropriate and what isn't. The machine is not making the actual decision of what to say; it is merely spurting sentences that are pre-existent. I doubt that Alice will ever have occasion to 'put her foot in her mouth.'
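The "pre-existing sentences" point can be made concrete with a toy matcher in the ELIZA/AIML style. This is a minimal sketch, not the actual A.L.I.C.E. engine, and the rules and replies below are invented for illustration: every output is a canned template that a human author chose in advance, and the program only selects among them.

```python
import re

# Hypothetical hand-authored rules: (pattern, reply template).
# A real AIML brain is the same idea scaled to tens of thousands of rules.
RULES = [
    (re.compile(r"are you (\w+)", re.I), "Yes I am {0}."),
    (re.compile(r"write .* sonnet", re.I),
     "Count me out on this one. I never could write poetry."),
]
DEFAULT = "I will ask my botmaster."  # fallback when nothing matches

def respond(line: str) -> str:
    """Return the first matching canned reply; never compose a new sentence."""
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(*m.groups())
    return DEFAULT
```

For example, `respond("Are you sentient")` echoes the matched word back into the template, which is why the interview transcript above reads so fluently despite there being no decision-making beyond pattern lookup.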

Insane people. (2)

FreeLinux (555387) | more than 11 years ago | (#3961133)

This is a perfect example of how truly insane people often appear sane and very intelligent.

Throughout his LONG, WINDING answers I could not help but feel that I was reading the words of a very intelligent person. He posed, for the most part, stimulating ideas and apparently well-thought-out arguments to support those ideas. I had no reason to question his sanity, so the thought never crossed my mind. That is, until I got to the question of the Chinese Room. This was a simple, though possibly offensively worded, question. It required a relatively simple answer but, based on previous answers, I was ready for a big one.

I was not ready, however, for the rambling tirade of a lunatic. The man is in fact DEEPLY disturbed and may indeed be a threat to his "friend of 20 years", Goldberg. I personally would be very uncomfortable, if not fearful, had that answer been a face-to-face encounter. I find the whole thing rather sad.

In the end, my conclusion is that A.L.I.C.E. is NOTHING more than she has always appeared to be. A mimic just as ELIZA was. And Wallace.... the man is really quite insane!

Re:Insane people. (0)

Anonymous Coward | more than 11 years ago | (#3961373)

Most on-target post I've read all day.

Re:Insane people. (1)

alicebotmaster (134416) | more than 11 years ago | (#3961502)

> lunatic
> DEEPLY disturbed
> a threat
> really quite insane

My goodness! Perhaps I should seek a restraining order against Mr. FreeLinux!

Dr. Rich

Re:Insane people. (0)

Anonymous Coward | more than 11 years ago | (#3961750)

He does a good job of coming off as a troll without his very own impersonator.

is it just me (0)

Anonymous Coward | more than 11 years ago | (#3961198)

Or does anyone else hear the faint sound of a giant bong rip between each of the good Dr.'s answers?

Bots and unemployed call center workers (1)

rolofft (256054) | more than 11 years ago | (#3961507)

"The most serious social problem I can realistically imagine being created by the adoption of natural language technology is unemployment. The concept that AI will put call centers out of business is not far-fetched. Many more people in service professions could potentially be automated out of a job by chat robots."
The hunter-gatherer economy guaranteed full employment; automation raises the standard of living. Isn't "eliminating jobs" the best result of AI: automating tasks so people don't have to do tedious work like manning call centers? Worrying that automated call centers will cause unemployment doesn't seem any more reasonable to me than worrying that Thomas Edison hurt the candle-making business.

So cool... (0)

Anonymous Coward | more than 11 years ago | (#3961651)

I read the part about 4:20 at 4:20... Gotta love that :)

Restraining Order (2)

esme (17526) | more than 11 years ago | (#3961696)

The comment about a little guy on disability being pushed around by the great and powerful University of California really struck me. Not because of the power of UC (ObDisclaimer: I work for UC San Diego). I'm aware of their power: I got pulled over by the campus cops one day for not coming to a complete stop at a stop sign. The first thing I did was go home and check out their legal authority to stop me, etc. It turns out they've got the full powers of any municipal police force. Let's take it as a given that any organization that has its own police force is very powerful.

What struck me was the fact that in Goldberg's position, I too might be afraid. From the point of view of a tenured professor, a mentally ill person on disability has nothing to lose. That is the most dangerous person in the world. And since he's obviously taken an interest in Goldberg, and is sending him what he calls "pathetic emails", I can assume that Goldberg might be a little weirded out by a lot of attention, rambling emails (if the interview responses are any indication), etc. from him.

Add to that the declaration that, while he used to regard all violence as unjustified, he now sees how some violence is understandable. In the context of the other stuff that was going on, I could see how somebody could get scared.

Obviously, compared to "Move your fucking car or I'm going to kill you", it's hardly a threat that inspires dread. But it still looks like it could be interpreted as a threat by a reasonable person.

-Esme

turing test and benchmarking (0)

Anonymous Coward | more than 11 years ago | (#3961698)

This whole chatterbot thing reminds me of the usual issues with heavy benchmarking (e.g. SPEC, Quake framerates, or here the Turing test): if a testing procedure becomes too popular, people spend their time passing that particular test as well as possible (think ATI drivers) instead of working on the general problem the test was trying to model.
Alice seems to be a good solution to the Loebner assignment, but fails at most other tests or definitions of AI that come to my mind. I'd rather spend my time on something less impressive to look at but involving more creativity than hand-coding a massive database of patterns.

chris

Signature (1)

Hank Chinaski (257573) | more than 11 years ago | (#3961715)

He should form his signature properly.
so not:

By order of the Superior Court of the State of California 6/28/02,
do not forward this email or its contents to Kenneth Y. Goldberg


but
--
By order of the Superior Court of the State of California 6/28/02,
do not forward this email or its contents to Kenneth Y. Goldberg

A.I. and Soap Operas (0)

Anonymous Coward | more than 11 years ago | (#3961753)


The objection that A.L.I.C.E. is a "poor A.I." is like saying that soap operas are poor drama. This may be true in some academic literary criticism sense. But it is certainly not true for all of the people who make their living producing and selling soap operas.
I'm not 100% sure how this is supposed to be interpreted. Does this imply that because people are investing their time and energy in 'poor A.I.', making a living from it, and the masses are suckered by it, maybe it's not so poor after all?

Or that it's the uneducated many and not the ivory-tower few who count? (Which would be ironic given the Dr.'s distaste for politics.)

Not trying to be pedantic - it's obviously only 2 sentences from a hell of a lot (!) of sentences.. it just seemed like an odd comparison.

Interesting interview all up - got weird/scary in places (e.g. near the end), and didn't answer the question at times (quite politician-ish, in fact, which again is ironic).. but better than a dozen RocketMan interviews any day (and at least as long).
