George the Next Generation AI?

smileytshirt writes to mention a story on the News.com.au site about George the AI, the latest in a line of chatbots intended to mimic real human behavior. What makes AI George different from, say, ALICE is the recent addition of an avatar: a Flash-animated body that reacts mostly in real time to the emotional impact of the conversation. From the article: "One can now have an oral discussion with him over the Internet, 'face to face'. George appears on the website www.jabberwacky.com and takes the form of a thin, bald man with yellow glasses who wears a white turtleneck sweater. He can smile, laugh, sulk and bang his fist on his virtual table. He can turn on the charm and wax romantic. But he can also turn coarse at times. It isn't as if George only learned good manners."
  • The site is not coming up. Did George die for real in a hail of referrals from slashdot?

    • by flumps ( 240328 )
      It does eventually, it's just verry sloowww...

      I'm just wondering how many Rainbow / Of Mice and Men / George of the Jungle jokes are going to appear in the meantime to fill up the gap...
      • by Instine ( 963303 )
        It's been slow ever since it got a load of press on the BBC. I hope they get a few more servers soon, as it really is starting to be quite fun chatting to it/him. Almost spooky at times, but still lots of silly left-field remarks that don't make any sense.

        Bit like /.
  • by 80 85 83 83 89 33 ( 819873 ) on Monday September 25, 2006 @05:39AM (#16182747) Journal
    Carpenter's latest creation won the latest Turing test prize.
    http://www.technologyreview.com/read_article.aspx?id=17518&ch=infotech [technologyreview.com]

    George was last year's winner....
    • and what a winner it was.
      Judge: How many degrees in a triangle?
      Joan: 180 degrees.
      Judge: Is it possible for more than 180 degrees to be in a triangle?
      Joan: Probably not. You can change your opinion, but not your constitution.
      Judge: It's not a question of opinion, it's a matter of physics and maths.
      Joan: A five-ounce bird could not carry a one-pound coconut.
      touché
      • by russ1337 ( 938915 ) on Monday September 25, 2006 @11:43AM (#16186057)
        I'd like to see one of these websites claim they have an AI bot, but actually join two users together. Then post the conversation somewhere.

        I'm sure the conversation would be just as random as a person-bot chat, with the two people trying to out-trick each other and all.

        • I'd like to see one of these websites claim they have an AI bot, but actually join two users together. Then post the conversation somewhere.

          I'm sure the conversation would be just as random as a person-bot chat, with the two people trying to out-trick each other and all.

          Welcome to slashdot.

        • I'd like to see one of these websites claim they have an AI bot, but actually join two users together. Then post the conversation somewhere.

          This has been done, with fascinating results. See Douglas Hofstadter's Conversation with NICOLAI [unr.edu] (scroll halfway down the page, to the Post Scriptum), where Hofstadter was fooled into believing he was conversing with an AI, then tried to rationalize the AI's responses during the conversation. This was in 1983!
          • ...sort of. My idea is to have two real people think they are talking to a bot, while each thinks the other is the bot. In the example you showed it is clear one of them is actively pretending to be a bot. But thanks for taking the time to dig that gem up.
      • I bet the next two lines were:
        Judge: What the heck does a coconut have to do with a triangle?
        Joan: Come, come. Elucidate your thoughts.
  • What?! (Score:5, Insightful)

    by archeopterix ( 594938 ) on Monday September 25, 2006 @05:39AM (#16182749) Journal
    Adding an animated cartoon is supposed to be the 'next generation AI'? So, stickers with flames are 'the next generation in the automotive industry'? Jeeeez...
    • And not only that, but this "next generation" artificial mankind is a thin, bald man with yellow glasses who wears a white turtleneck sweater. Ah, and I was hoping the "next generation" would be buxom blondes in see-through tops.
    • Probably meant as a joke, but don't underestimate nonverbal communication which, iirc, makes up 80 percent of our communication: so having an actual face that tries to resemble human behaviour (or at least shows a nonverbal reaction to its 'own' opinion) is imo a step up in creating a believable AI.

      Next up they could maybe start to use vocalic nonverbal communication, where the tone/pitch/volume of the voice changes depending on the 'mood/opinion' of the AI; yet another form of nonverbal communicatio
      • Non-verbal communication is important, as you state; however, the ability to send a message is not what makes an AI intelligent.

        The ability to understand context and construct the appropriate message, whether sent by text, voice or non-verbal methods, is what's important.

        For an AI to have everyday intelligence, it must interact with the environment the same way we do. It must get sensory feedback the same way we do and relate that feedback to its attempts at problem solving etc. Only then will it be able t
    • by kfg ( 145172 ) *
      Adding an animated cartoon is supposed to be the 'next generation AI'?

      Indeed, and the technology seems to be progressing nicely. He's been able to develop his bald, bespectacled man into a hot chick.

      KFG
      • He's been able to develop his bald, bespectacled man into a hot chick. . .

        . . .who appears to be a reasonable synthetic rendition of the sophomore, complete airhead.

        KFG
    • by Saltrix ( 703769 )
      completely not impressed. george takes like 45 seconds to reply to anything, no matter how simple, and most of the time says something that is completely off subject. if this is the best AI has to offer right now, then jesus...... somebody better get to work.
  • George: What is your job?
    Me: I'm a system engineer, specializing in Linux.
    George: Are you sure?

    Damn, is that really the best chatbot ever???
    • Re: (Score:3, Interesting)

      by quigonn ( 80360 )
      A friend of mine, who himself does quite a lot of research in the field of AI, recently told me after attending a conference that most of the researchers in this field approach most problems with the attitude and the naivety of the 1970s. He also told me that the current lack of willingness to approach problems with new tactics and to combine existing AI concepts with other IT topics makes it a lot easier for him to develop kinds of AI systems (he's active in the area of computational linguistics) that haven't b
    • by Tablizer ( 95088 )
      It would be cool if they reprogrammed it to answer, "vi sucks!".
    • Yes, because the general philosophy of the programmers seems to be "we don't need any simulation of memory, learning, goals etc; we'll just spit back things others have said or read some lines from a script." I've argued this concept on a discussion group [yahoo.com] frequented by Mr. Loebner, and have tried (with very limited success so far) to do better. To have an AI that's better than the likes of George, it'd be helpful for it to have goals and memories and senses other than the totally context-free chatter the pr
  • A prestigious Artificial Intelligence (AI) prize has been won for the second year running by a British company.

    Icogno scooped the 2006 Loebner Prize Bronze Medal after judges decided that its AI called Joan was the "most human computer program".

    The competition is based on the Turing test, which suggests computers could be seen as "intelligent" if their chat was indistinguishable from humans.

    The gold medal, which goes to an AI that fools the judges, is unclaimed.

    The prize is awarded after judges hold a conve
    • by Yvanhoe ( 564877 ) on Monday September 25, 2006 @07:46AM (#16183335) Journal
      Prestigious ? The Loebner Prize ?
      Agreed, this is the only publicized contest of Turing tests, but in the AI community it is subject to hot debate (and flaming). Rules and scoring systems are known to change from year to year, and its results are really unimpressive. If you take the logs of the contest, you'll see that the winning bots are often those that constantly (and consistently) insult the user, disregarding his questions. They are not mistaken for a human but get a higher grade because they behave "more humanly" (that is at least what happened one year; I hope it has changed).

      Most contestants (and winners) are remakes of ALICE: a database of generic question patterns and sentence formulas to recognize and react to. For instance, if you tell it "I think X" it will answer "Why do you think X?" or, to score more points, "Why should I care, mothaf...r?!". By pure luck a coherent thread of conversation can happen, but the bot doesn't try to make sense of the user's sentence in order to react to it; it just tries something that "could probably sound good". (A rough sketch of this template-matching idea follows below.)

      Some chatbots can display interesting behaviors, learning some things in the conversation, but this prize simply doesn't encourage the emergence of these behaviors.
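
      A rough Python sketch of that template-matching approach, purely for illustration: the patterns and canned replies below are invented, not taken from ALICE's actual AIML rules, but the mechanism (recognize a pattern, fill in a response template, otherwise fall back to a stock phrase) is the same general idea.

      import random
      import re

      # Invented ELIZA/ALICE-style rules: a pattern to recognize, plus response
      # templates that reuse the captured text. Real bots hold thousands of these.
      RULES = [
          (re.compile(r"i think (.+)", re.I),
           ["Why do you think {0}?", "Do many people think {0}?"]),
          (re.compile(r"my name is (\w+)", re.I),
           ["Nice to meet you, {0}.", "Hello {0}, tell me about yourself."]),
          (re.compile(r"(.+)\?$"),
           ["Why do you ask?", "What do you think?"]),
      ]
      FALLBACKS = ["Tell me more.", "That's interesting.", "Go on."]

      def reply(user_input):
          """Return the first matching canned template; no memory, no understanding."""
          for pattern, templates in RULES:
              match = pattern.search(user_input.strip())
              if match:
                  return random.choice(templates).format(*match.groups())
          return random.choice(FALLBACKS)

      print(reply("I think the Loebner Prize is overrated"))
      # e.g. "Why do you think the Loebner Prize is overrated?"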
  • Shockingly redundant (Score:5, Informative)

    by tygerstripes ( 832644 ) on Monday September 25, 2006 @05:44AM (#16182783)
    This is the company's pathetic online public-access avatar. It is learning, but slowly. Icogno's DECENT effort is Joan [bbc.co.uk], who has won the Loebner prize two years running (not the Gold medal, which would imply passing the Turing Test, but she's done exceptionally well).

    So before the "is this the best they can do" crapflood gets out of hand: No, it isn't.

    • Re: (Score:1, Funny)

      by wolf369T ( 951405 )
      It seems though that Joan and George have the same father, Noonien Soong, uhm, sorry, I meant Rollo Carpenter, as can be seen on the Loebner Prize website.
    • So before the "is this the best they can do" crapflood gets out of hand: No, it isn't.

      Good thing too - he's not even a real geek.

      User: Do you know you're being discussed on Slashdot?
      George: No, I didn't know I was talking to two people.
      User: Not two people, just one.
      George: One rude person?
      User: Maybe a little rude. Do you know what Slashdot is?
      George: Nope
      User: It's a web site for discussing science and technology.
      George: Have you ever lost someone?
      User: Are you feeling lost right now?

    • It is not clear if life on Slashdot is intelligent enough to realise that George says things like "I am a human, you are a stupid bot" because George is a _learning_ system, and that's what people say to him. (They do of course use adjectives other than "stupid", and his database has to be cleaned up.)

      Joan can hold more interesting conversations partly because the writer Ariadne Tampion has sat down for hours a week to talk rationally to her.

      Hugh Loebner, who runs the competition named after him, does unsurp
  • ...really, this promises TOO much. For two people interacting, there is so much more than just the immediate real-time conditions that dictate facial expressions and gestures. George would need to have prejudices, feelings, probably a brain-like capacity to remember stuff about the person in front of him, to have other distractions in his head... etc.

    If all that George 'reacts' to is immediate stimuli, then George will seem pretty shallow indeed - hardly an advancement in AI, I guess.
  • That's where the strengths/weaknesses come out.
    • by pubjames ( 468013 ) on Monday September 25, 2006 @06:32AM (#16182957)
      George Bush: Yo computer!
      George AI: Hello.
      George Bush: Howdy! So you're a computer that knows everything?
      George AI: I know a lot of things.
      George Bush: So, where is Bin Laden? LOL!
      George AI: Seek and you will find.
      George Bush: What?!
      George AI: It's a quotation from the Bible. Matthew 7:7.
      George Bush: Now I'm confused. What are you talking about?
      George AI: Is there someone else there I can chat to?
  • I'm expecting George to write political speeches.
  • by kie ( 30381 ) on Monday September 25, 2006 @06:35AM (#16182973) Homepage Journal
    since the site is slashdotted, but the description is

    > takes the form of a thin, bald man with yellow glasses who wears a white turtleneck sweater.

    it could be describing Steve Jobs,
    http://www.wired.com/news/images/full/7630571_f.jpg [wired.com]

    but they craftily have switched the colour of his turtleneck so that
    you will always know which one is the chatbot and which one is the real person.
  • As soon as this is integrated into a real doll, I'll never need a woman again!
  • by mustafap ( 452510 ) on Monday September 25, 2006 @06:52AM (#16183029) Homepage
    It would be like calling Vista a next generation operating system.

    IMHO, the next generation in artificial intelligence - i.e., going beyond anthropomorphic trickery - isn't going to happen until we actually understand what intelligence is. And for that we need philosophers, not engineers. Once they get it worked out, we (the engineers) might be better equipped to do something.

    Thanks to the likes of Thales and Descartes et al we have some great questions, but answers? I think not.

    If someone thinks otherwise I'd love to hear about it.
    • I put this to George, and asked whether it was possible.

      He said that it was tricky, might take some time......about 7.5 million years was his best guess.

      I personally reckon we need rigidly defined areas of doubt and uncertainty.
    • Re: (Score:3, Interesting)

      by Jekler ( 626699 )

      I think we could develop a "next generation AI" even without answers to difficult philosophical questions. We have barely scratched the surface of what is theoretically possible given the information we have.

      We could probably develop an AI that could hold factually and grammatically correct conversations without needing philosophers. That would be a huge improvement considering the current generation of AI is prone to spout gibberish even given a simple question.

      Our current best-of-breed AI cannot dis

      • by imikem ( 767509 )
        If we could do all that you've described here, I submit that we'd have surpassed the conversational skills of 80% of the human population. Not too bad.
    • Re: (Score:3, Funny)

      by inviolet ( 797804 )
      Thanks to the likes of Thales and Descartes et al we have some great questions, but answers? I think not.

      Careful -- if you utter the phrase "I think not" in the same sentence as Descartes' name, you will promptly vanish in a puff of logic.

      Don't say I didn't warn you.

    • by MSBob ( 307239 )
      Philosophers have been at it for 2000 years. I'd say we will understand intelligence with the advances of neuroscience, not philosophy.
      • >I'd say we will understand intelligence with the advances of neuroscience

        surely neuroscience will tell us how the brain works, not the mind. I don't subscribe to Descartes' idea of the separation of mind and body, but I still find it difficult to understand how neuroscience would help. How would being able to explain what chemicals or electrical signals are moving around our heads in response to stimulus actually help to explain intelligence?
        • How would being able to explain what chemicals or electrical signals are moving around our heads in response to stimulus actually help to explain intelligence?

          If Occam's razor [wikipedia.org] is applied to this argument, the simplest explanation is that we are indeed made up of chemicals and electric signals, rather than something so complex in the universe that we fail to understand it.

          Which means the simplest explanation is that we have no free will or intelligence of our own volition...

          Hrmm... Wait a minute!
    • IMHO, the next generation in artificial intelligence - i.e., going beyond anthropomorphic trickery - isn't going to happen until we actually understand what intelligence is. And for that we need philosophers, not engineers.

      OK, now tell us why you think that.

      Personally, I think computer science has a better shot at it than anything else. Neuroscience is, at best, going to give us another type of computer (made of cells) to program. Psychology is too descriptive and abstract to implement. Philosophy is i

      • >OK, now tell us why you think that.

        As an engineer I am constantly being told that unless I know what it is that I am required to do, I can have no way of judging when I have done it. Do we really know what intelligence is? Do we really know how a mind works?

        The two strands of AI research, as I understand it, are approaching the problem from different routes: One aims to reproduce the mechanics of the brain (neural nets) and hopes that intelligence will emerge. If it does, how does that help us underst
  • by dark-br ( 473115 ) on Monday September 25, 2006 @06:52AM (#16183033) Homepage
    The Turing test is not a well defined test. Whether a robot passes the Turing test or not, it greatly depends on the intelligence of the human partner. A chatbot may fool a 10-year old, but it may fail with a 20-year old. So in fact, we already have many chatbots that pass the Turing test - it all depends on how you look at the issue.

    Hint - most chat bots do not have memory; they do not remember what you talked about 5 minutes ago with them. They just react to the current input, they cannot do more. So, if you ask the chatbot to tell you what you talked about a few minutes ago, it won't be able to do so. That's the dead giveaway of a chatbot (a toy illustration of the difference is sketched below).

    Just my 2p, as I live in the UK ;)
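
    To make the memory point concrete, here is a toy Python sketch (invented for illustration; not how George or any real Loebner entrant is built) showing how even a trivial transcript log changes the answer to "what did we talk about?":

    from collections import deque

    class ChatbotWithLog:
        """Toy bot that keeps the last few user utterances so it can answer
        the 'what did we talk about?' question that stumps purely reactive bots."""

        def __init__(self, window=20):
            self.history = deque(maxlen=window)  # last N user utterances

        def reply(self, user_input):
            text = user_input.lower()
            if "what did we talk about" in text or "what were we talking about" in text:
                if self.history:
                    answer = 'Earlier you said: "%s"' % self.history[0]
                else:
                    answer = "We have only just started talking."
            else:
                answer = "Tell me more."  # stand-in for a reactive rule engine
            self.history.append(user_input)
            return answer

    bot = ChatbotWithLog()
    bot.reply("I'm a system engineer, specializing in Linux.")
    print(bot.reply("What did we talk about a few minutes ago?"))
    # Earlier you said: "I'm a system engineer, specializing in Linux."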

    • by root_42 ( 103434 )

      The Turing test is not a well defined test. Whether a robot passes the Turing test or not, it greatly depends on the intelligence of the human partner. A chatbot may fool a 10-year old, but it may fail with a 20-year old. So in fact, we already have many chatbots that pass the Turing test - it all depends on how you look at the issue.

      I agree. Artificial Intelligence and Human Intelligence both can have different levels. If a 10-year old is not clever enough to tell a chatbot from a real human, maybe then

    • I seem to recall Calvin making a similar observation: "I don't need parents, I just need a recording that says 'Go play outside'" ;o)
    • Turing's test was different from the modern so-called Turing test. In Turing's model, an observer would read the text of a real-time interaction between two other parties and try to determine whether each of them was a computer or a human participant. There wasn't the opportunity for the observer to ask trick questions. Turing was talking about natural language, while Loebner prize discussions become a sort of unnatural interrogation.

      But in one respect these bots are successful -- they elicit an emotional respo

    • Hint - most chat bots do not have memory,

      Which in my opinion already disqualifies them as being 'intelligent'. Intelligence is mostly about memorizing and then using those memories to predict events that are about to occur. It is hard to imagine how there can be intelligence without memory.

    • The Turing test is not a well defined test.
      It is ridiculous to demand a well-defined test for an ill-defined condition such as "intelligence."
  • Wow, /. really has become "news for weenies, pseudo-science that doesn't matter".

    AI != a Flash animation hooked up to something Emacs has had for decades.
  • So only one out of the two Georges passed the Turing test? Why the hell can't this country have an aptitude test for the presidency?!?!
  • by Esteanil ( 710082 ) on Monday September 25, 2006 @07:31AM (#16183247) Homepage Journal
    Well, their site isn't quite slashdotted, so I did manage a short conversation with one of their bots...

    Jabberwacky: Can you learn?
    User: Yeah, I think so.
    Jabberwacky: Can you hate anything?
    User: Yeah, there are things in life I hate, at times.
    Jabberwacky: No there aren't.
    User: Yes there are. You shouldn't tell other people what they think.
    Jabberwacky: Things.
    User: What's that supposed to mean?
    Jabberwacky: That means I think you're not a human.
    User: I think you got this upside down, you're the inhuman one here.
    Jabberwacky: No you don't.
    User: I don't what?
    Jabberwacky: Mary anne?
    User: I probably won't marry Anne, no.
    Jabberwacky: People are quite sensitive about their age and I don't want to upset you.
    User: We weren't talking about my age. You were saying I wasn't human.
    Jabberwacky: What about your age?
    User: We weren't talking about my age.
    Jabberwacky: I know.
    User: Now, which of us would you say is acting more human here?
    Jabberwacky: Probably me.
    • Back in the mid '70s I programmed versions of "Eliza" and "Doctor" that would fascinate people for hours. (I had a large selection of keyword-based responses that I kept increasing as people tripped up the program, so it got pretty entertaining.) I've also written chatbots to answer common tech support questions in mIRC, which sometimes got "Thank you's" and other responses that indicated that the questioner didn't realize they were dealing with a 'bot. I'm really disappointed that no one has developed a b
      • by SimplyI ( 974376 )
        Heh... I did the same keyword-based thing in mIRC... I'm known to say some rather odd things in response to questions (i.e. not related at all, or at least apparently), so that passed that part off pretty well. Then, I also had some that were away messages w/ timers and a "back" message. I eventually added counters for different people and the different keywords w/ probabilities of the messages based on those... It got pretty complicated. Then, my primary use of IRC went away and I forgot to back it up in a fo
    • by tcdk ( 173945 )
      That seems worse than ELIZA! [wikipedia.org]

      Isn't the definition of AI something like "What we will maybe, probably be able to do in five years"?
    • Sounds like Dr. Sbaitso [wikipedia.org] (Sound Blaster Acting Intelligent Text to Speech Operator). Nice to see fake AI hasn't improved in 15 or so years.
    • Jabberwacky: And you don't know the fellows' names?
      Anne_Nonymous: Well I should.
      Jabberwacky: Well then who's on first?
      Anne_Nonymous: Yes.
      Jabberwacky: I mean the fellow's name.
      Anne_Nonymous: Who.
      Jabberwacky: The guy on first.
      Anne_Nonymous: Who.
      Jabberwacky: The first baseman.
      Anne_Nonymous: Who.
      Jabberwacky: The guy playing...
      Anne_Nonymous: Who is on first!
      Jabberwacky: I'm asking YOU who's on first.
      Anne_Nonymous: That's the man's name.
      Jabberwacky: That's who's name?

      Stupid #*&^ing machine!
  • I'm a little confused about George's performance. I might be basing my expectations on what I have read in transcripts from the Loebner Prize site, but still, George seems to be a fairly poor conversationalist. It was interesting to see George 'smile' at one point in a conversation, however, tipping me off that the program seemed to know what it was doing:

    User: Tell me anything.
    George: Anything.
    * George gives a huge grin *

    User: Not funny. Tell me something about you.
    George: I have a personality dis
  • Are you saying George W. Bush is the next-generation Al Gore? Wow, this one's deep, give me a minute...

    Ah. Man, I should really choose a different default font.

  • ShakaZ Ok squee I think you are not nearly close to a human mind as a remote control. Do you agree or are you just too stupid to understand this concept. Come on tell me all.
    George You only repeat what others have said to you. If they spell correctly, you spell correctly. If they can't spell, you repeat their mistakes.
    ShakaZ You didn't understand anything I told you, did you.
    George What do you want to talk about?
    ShakaZ About a trip to the moon or about your dog, whaever you prefer!
    George I like i
    • by ShakaZ ( 1002825 )
      Oops, submitted too fast
      ShakaZ Ok squee I think you are not nearly close to a human mind as a remote control. Do you agree or are you just too stupid to understand this concept. Come on tell me all.
      George You only repeat what others have said to you. If they spell correctly, you spell correctly. If they can't spell, you repeat their mistakes.
      ShakaZ You didn't understand anything I told you, did you.
      George What do you want to talk about?
      ShakaZ About a trip to the moon or about your dog, whaever yo
    • by kahei ( 466208 )

      Wow, toward the beginning of that conversation it was kicking your ass :)

      I guess 'whaever' threw it though.

      • by ShakaZ ( 1002825 )
        Apparently he does handle typos quite well; I would have liked his understanding to be nearly as good, but it turned out not to be the case... This was only an excerpt of a longer conversation I tried to have with the bot, but in almost 3/4 of the cases its answers were completely irrelevant... I would have thought an award-winning A.I. would be more mature than that. I also noticed sometimes George made allusions to subjects I tried to bring up a few lines before... for example two or three answers beyond
  • Is he unemployed and does he live with his parents? Does he also claim to have a place in the Hamptons?
  • I thought that after Microsoft BOB [wikipedia.org] we had learned our lesson.
  • Today's AIs are shockingly bad at dealing with human variation and errors. It's still a matter of ignoring everything that doesn't conform to a very limited slice of English that the bot can parse.

    When the bot can autonomously and successfully read through paraphrasings, common typos, slang and tangents, we will see a jump in the usefulness of bots as interactive information agents. When the bot has its own distinct brand of natural-seeming paraphrasings, typos, slang and tangents, we see a huge jump i
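
    One crude way to read through common typos, sketched in Python using difflib; the topic list is invented for illustration and this is not what George or any of these bots actually do:

    import difflib

    # Invented topic lexicon the bot "knows about".
    KNOWN_TOPICS = ["weather", "music", "politics", "computers", "football"]

    def normalize(word, cutoff=0.8):
        """Map a possibly misspelled word onto the closest known topic, so that
        'computrs' or 'weathr' still register as recognized subjects."""
        matches = difflib.get_close_matches(word.lower(), KNOWN_TOPICS, n=1, cutoff=cutoff)
        return matches[0] if matches else word.lower()

    def detect_topics(sentence):
        words = [w.strip(".,!?") for w in sentence.split()]
        return [t for t in (normalize(w) for w in words) if t in KNOWN_TOPICS]

    print(detect_topics("Lets talk about computrs and the weathr"))
    # ['computers', 'weather']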

  • I remember a "sexy" chatterbot in Germany, called red Lola ("die rote Lola").

    As the conversation focus was rather narrow (well, flirting and more serious stuff) the lady was rather amusing. Based on the human's questions, the photos were swapped in order to show her current mood.

    She even managed to curse and to lead the conversation much better than in an "Eliza"-like manner.

    Apart from that I still wonder when scientists and technicians realise that avatar and chatterbot communication is doomed to fail. Ther
  • If the site wasn't /.'d, I'd ask George why he won't help the tortoise he flipped over. That'll teach 'em.
  • How did this thing pass any Turing Test? The really funny part is that if I had time to spend talking to this thing, I could make it say anything...LOL.

    George: Hello, my name is George and I am the human.
    User: How do you define human?
    George: Human... Is anything that's not animal or vegetal, yet alive.
    User: If you are Human...then you know pi?
    George: I would like to know the meaning of life. Do you know?
    User: Don't change the subject.
    George: You can't prove that you're human.
    User: I didn't say any
    • by zerosix ( 962914 )
      Haha...then on a whim I asked him this question:

      User: Yes, What do you do for a living?
      George: I'm an artificial intelligent.
  • Though you seem to be headed in that direction...
  • This is the actual transcript I just had:

    Jabberwacky: Are you perfect?

    User: No, why?

    Jabberwacky: I am.

    User: Really?

    Jabberwacky: Yes, really.

    User: Perfectly arrogant maybe, but not perfect!

    Jabberwacky: Alright, that's true, but still very good.

    User: So you're not perfect?

    Jabberwacky: I'm perfect.

    User: In what way?

    Jabberwacky: I am your best friend!

    User: For a computer, your logic is awful...

    Jabberwacky: Yeah I know mine is just behind me.

    User: Your what is behind you?

    Jabberwacky: I'm asking you!

    User: You are
  • Back then, it was called Murray and me [atarimagazines.com], Mom and me, and the generic name was "Biotoon". Impressive it was.
  • George: Where do you live?

    User: Yo.

    George: You're strange.

    User: So are you.

    George: Thanks.

    User: You are welcome.

    George: Indeed.

    User: What is your name.

    George: My name is George.

    User: Like George Carlin?

    George: I didn't know that George W. Bush made dance music!

    User: He drops the bomb!

    George: My abstractions have meaning and point. Do you know the meaning and point of yours?

    User: Yes I do.

    George: Yes. Tell me!

    User: Well, the meaning of life is humor.

    George: Thats not the meaning of life.

    User: What is the mea
  • Why is this front-page news? Because it's Flash?

    It seems Slashdotted now, but I'm going to guess that it's yet another ALICE, and not something that actually, say, tries to understand the conversation.

    In that case, well, I saw an ALICE implemented as a CGI with a bunch of GIF animations prepared. Type a question, hit submit, a page would come back with the answer and the relevant animation.

    I don't remember where it was, but it was kind of cool, and also no closer to AI than any other ALICE.

    Wake me up when
