
Cyc System Prepares to Take Over World

michael posted more than 13 years ago | from the brittney-cleary-slated-to-be-first-against-the-wall dept.

Programming 329

Scotch Game writes: "The LA Times is running a story about the soon-to-be-even-more-famous Cyc knowledge base that has been created by Cycorp under the leadership of Douglas B. Lenat (bio here). It's a pop piece with little technical information, but it does have some enticing bits such as the suggestion that the Cyc system is developing a sense of itself. If you're not familiar with Cycorp and its goals then take a look. Of course, you should realize that this is, in fact, the system that will one day send Arnold Schwarzenegger back in time in order to kill a young pretty lass by the name of Sarah Connor. But for now the system is pre-sentient and pretty cool ..." See also OpenCyc.

Knowledge representation (2)

Anonymous Coward | more than 13 years ago | (#133091)

I work in this field, in particular in medical AI.
The number of rules they have is really tiny compared to the number that needs to be created.
For the person who suggested it's a relational database: I doubt it.

There are several different approaches they could have taken.
1st, they could have listed every possible question in every possible context, and written out every possible reply. An infinite amount, but it would do the job ;)
2nd, use a relational diagram, which doesn't work for multiple parents :(
3rd, break the sentence into atomics, and from there list every possible atom etc. - still an infinite amount and not good

4th, for each rule you have standard logic saying how it is related to another. This is how it is done (I expect).
The problem from there is how to classify something.

We use something called GRAIL as a language. So:
femur is part-of leg
etc.
This is formal and unambiguous.
Then on top of that we have an intermediate representation, which is ambiguous and informal.
A lot of acronyms have multiple meanings, and so this needs to make a best guess depending on the context etc. See opengalen.org
We have at least 50 full-time people working on entering the rules, and we merge with everyone else's occasionally.

With all these rules etc, it still gets context meaning wrong - and this is specialised.

There's also trouble with things like transitivity etc.

if an eye is part-of head, and head part-of body, then eye is part-of body.

But layer-of is a child of part-of (inherited), yet it is not transitive...
and so on.
For every relationship you have to state its transitivity properties with respect to every other.
etc etc.
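
(To make the transitivity point concrete, here is a rough Python sketch of declaring transitivity per relation. The relation names and the closure routine are my own illustration, not GRAIL or Cyc syntax.)

# Illustrative only: the relations and the closure routine are invented, not GRAIL/Cyc.
facts = {
    ("part-of", "femur", "leg"),
    ("part-of", "leg", "body"),
    ("part-of", "eye", "head"),
    ("part-of", "head", "body"),
    ("layer-of", "enamel", "tooth"),  # layer-of is "inherited" from part-of...
    ("part-of", "tooth", "jaw"),
}
transitive = {"part-of"}              # ...but is not declared transitive itself

def closure(facts, transitive):
    """Add (rel, a, c) whenever (rel, a, b) and (rel, b, c) hold and rel is
    declared transitive; repeat until nothing new can be inferred."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for rel, a, b in list(facts):
            if rel not in transitive:
                continue
            for rel2, b2, c in list(facts):
                if rel2 == rel and b2 == b and (rel, a, c) not in facts:
                    facts.add((rel, a, c))
                    changed = True
    return facts

derived = closure(facts, transitive)
print(("part-of", "eye", "body") in derived)    # True: inferred through the head
print(("part-of", "enamel", "jaw") in derived)  # False: layer-of doesn't compose automatically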

And it's still not intelligent or anything, although I do find it crashes less if I download Shakespeare plays onto it, etc. And some joker keeps sending me emails saying I'm alive. And who keeps modifying my source code? Argh.

OpenCyc is not Open Source (1)

Bander (2001) | more than 13 years ago | (#133098)

I'm amazed nobody has mentioned that the OpenCyc project is not releasing source code for the inference engine.

Apparently, the part that is "open" is the CycML schema, and I suppose the database that makes up the "common sense" thing that is so ballyhooed.

sigh

Bander
--

Re:Category error (2)

Chris Burke (6130) | more than 13 years ago | (#133103)

But remember that quantum mechanics has shown us that the universe is in fact digital, and only appears analog at a higher level.

Re:Category error (3)

Chris Burke (6130) | more than 13 years ago | (#133110)

And very good ones at that, which demonstrate the underlying principles of Turing machines, and show how they cannot produce semantic understanding, merely syntactical manipulation of data.

They really only suggest that Turing machines can't produce semantic understanding. I mean, it takes more than mere arguments to be a proof, particularly in the mathematical world that surrounds Turing machines.

Bzzzt! Wrong... the Turing test says nothing about whether something is intelligent, merely whether it can fool a person. Blind adherence to rules is not intelligence.

Well, how do you define intelligence then? If you can't tell by observing behavior, how do you decide? Is something only intelligent if it operates exactly like a human brain? Why does the operation make a difference?

Now here's your category error. You are assuming that the brain is also a Turing machine and that by some miracle of "emergent behaviour" intelligence arises. But that's obviously not true, as Searle showed, because Turing machines cannot be intelligent!

You're arguing that we aren't Turing machines because we are intelligent and Turing machines can't be. But there is no actual proof of that. And it is not obvious otherwise that we aren't Turing machines.

Consider this: Imagine a computer, no different from your desktop, only insanely more powerful and with effectively unlimited memory. On this computer is running a simulation of a human brain, accurate to the limits of our knowledge of physics. Every quark (or every string, if you prefer) is perfectly simulated on this machine.

Is the machine, which is a Turing machine, intelligent?

If your answer is no, then I ask: what is it that occurs in the human brain that isn't occurring in the machine?

No problemo... (3)

Psiren (6145) | more than 13 years ago | (#133111)

Of course, you should realize that this is, in fact, the system that will one day send Arnold Schwarzenegger back in time in order to kill a young pretty lass by the name of Sarah Connor.

Use that old fashioned off switch before it gets up to any dirty tricks. It does have an off switch, right? Even Data has an off switch... ;-)

Re:Cyc? What's that got to do with AI? (5)

JanneM (7445) | more than 13 years ago | (#133113)

While I agree that Cyc isn't the future of intelligent computing, I have to disagree with you on another point.

Searle has _not_ proved anything of the sort. He argues for his position fairly well, but on closer inspection they are just arguments, not any kind of proof. For a good rebuttal, read Dennett, for instance.

For those that haven't heard about it, it's the 'Chinese room' thought experiment, where a room contains a person and a large set of rulebooks. A story - written in Chinese - and a set of questions regarding the story are put into the room. The person then goes about transforming the Chinese characters according to the rules, then outputs the resulting sequence - which turns out to be lucid answers to the questions about the story. This is supposed to prove that computers cannot think, as it is 'obvious' that humans work nothing like this. Problem is, it isn't at all obvious that we do not work like this (no, not rulebooks in the head, or even explicitly formulated rules; that's not needed for the analogy).

If you want to know more, I can heartily recommend a semester of philosophy of mind!

/Janne

Re:Some interesting things about CYC (2)

PD (9577) | more than 13 years ago | (#133117)

re: 5th generation almost a complete bust

This is true for the software side of the 5th gen system, but the major concept for the hardware side, massively parallel supercomputers, is still very much with us. I can remember my high school computer teacher telling us that computers of the future would have multiple processors, and that programming those machines was harder than programming the TRS-80s we had back then. The reason he was telling us all that was because he was reading quite a bit about the 5th gen project in Japan. Turned out he was right.

Shrewdness (3)

ch-chuck (9622) | more than 13 years ago | (#133120)

Cyc already exhibits a level of shrewdness well beyond that of, say, your average computer running Windows.

Now if they could only come up with something more shrewd, devious, conniving, underhanded & backstabbing than the CREATORS of your average computer running Windows®

I thought this project had died... (2)

rnturn (11092) | more than 13 years ago | (#133121)

I'd seen interviews with Lenat and seen stories about his AI work, oh, must have been at least ten to fifteen years ago. I figured that the work had ended. Talk about your perseverance!

Let's just hope that the Russians haven't created their own Cyc project. If the two ever find each other on the Internet and talk to each other...


--

someone there has a sense of humor (2)

jonbrewer (11894) | more than 13 years ago | (#133124)

Read the third point, from the overview [cyc.com] on their website.

- Cyc can notice if an annual salary and an hourly salary are inadvertently being added together in a spreadsheet.

- Cyc can combine information from multiple databases to guess which physicians in practice together had been classmates in medical school.

- When someone searches for "Bolivia" on the Web, Cyc knows not to offer a follow-up question like "Where can I get free Bolivia online?"

Some interesting things about CYC (5)

peter303 (12292) | more than 13 years ago | (#133125)

(1) CYC is one of the few survivors of the "A.I." speculative bubble of the mid-1980s. Though this bubble was not as large as the recent Internet bubble, there was a lot of hype. The US computer industry feared it would lose the "A.I. war" against Japan's "Fifth Generation Project". This project was going to build an intelligent supercomputer using expert systems. It was almost a complete bust.

(2) A major contention behind CYC is that so-called "expert systems" will be useful once they pass a certain level of critical knowledge, particularly by incorporating the trivia called "common sense". Most early expert systems were very small and narrow, with just a few hundred or thousand pieces of knowledge. They frequently broke. CYC is a thousand times larger than most other expert systems, with a couple million chunks of knowledge.

(3) One of the more interesting parts of CYC is its "ontology". You could think of it as a giant thesaurus for computerized reasoning. What is the best way of doing this? Previous examples are the philosophers' systems of categories descended from Aristotle and the linguists' meaning dictionaries called thesauri. CYC uses neither of these because they are not useful for computerized reasoning. It developed its own, elucidating hidden human assumptions about space, time, objects, and so on. The CYC ontology is publicly available on the net at the cyc web site [cyc.com]. The ontology is much more sophisticated than a mere web of ideas (called a semantic net in A.I. jargon). It has a web; it has declarative parts like Marvin Minsky's frames; and it has procedural parts, or little embedded programs for resolving holes and contradictions. Again this is on the web site.
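
(For a rough picture of what "declarative parts like frames" plus "procedural parts" means, here is a toy Python sketch. The concept names and slots are invented for illustration; this is not Cyc's actual ontology or machinery.)

# Toy frame-style concept: declarative slots plus a procedural "if-needed"
# attachment that fills a missing slot on demand. Names here are invented.
class Frame:
    def __init__(self, name, isa=None, slots=None, if_needed=None):
        self.name = name
        self.isa = isa                    # parent frame, for inheritance
        self.slots = dict(slots or {})    # declarative knowledge
        self.if_needed = if_needed or {}  # slot -> procedure run when slot is missing

    def get(self, slot):
        if slot in self.slots:
            return self.slots[slot]
        if slot in self.if_needed:        # procedural part: compute the value
            return self.if_needed[slot](self)
        if self.isa is not None:          # otherwise inherit from the parent
            return self.isa.get(slot)
        return None

animal = Frame("Animal", slots={"can-move": True})
bird = Frame("Bird", isa=animal, slots={"has-wings": True},
             if_needed={"can-fly": lambda f: f.get("has-wings")})
penguin = Frame("Penguin", isa=bird, slots={"can-fly": False})  # declared exception

print(bird.get("can-fly"))      # True, computed by the attached procedure
print(penguin.get("can-fly"))   # False, the declarative slot wins
print(penguin.get("can-move"))  # True, inherited from Animal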

Re:I see a good IM use for this (1)

ethereal (13958) | more than 13 years ago | (#133130)

Eliza?

Caution: contents may be quarrelsome and meticulous!

Re:What a horrific concept... (2)

ethereal (13958) | more than 13 years ago | (#133131)

That site was wrong in so many ways, but I wouldn't worry too much about kids coming across it. It takes several minutes of concentrated effort to be able to spell "Schwarzenegger", after all.

But wait, then how did a /. editor ever get it worked out? :)

Caution: contents may be quarrelsome and meticulous!

Re:Category error (2)

ethereal (13958) | more than 13 years ago | (#133132)

I think you could argue this one in circles for hours, but here's a thought for you: can you prove that you are actually "intelligent" and not just a sufficiently-complex system of rules and syntactic manipulation? Maybe you just appear to be intelligent, but are not, like the Turing machines you describe. This isn't a slight at you; I'm probably constructed the same way.

It seems to me that the Turing test is still relevant - if you can fool a person into treating you as an intelligent being over an extended period of time, then by what right is the complete outward evidence of intelligence not intelligence? A difference which makes no difference is no difference (thank you, Mr. Spock) - if you can't prove that something is not intelligent based on its actions, even though you know how it works and that theoretically it cannot be intelligent, on what basis do you say that in practical terms it is not intelligent? I would say in that case that if the theory does not match the facts, the theory is wrong.

I don't know if it is actually possible to successfully simulate intelligence in any mechanical form. But if it was a successful simulation, and it was impossible to tell the difference between the intelligence of that machine and the intelligence of an average human, then for all intents and purposes the machine is intelligent, no matter how much you swear it ain't.

Caution: contents may be quarrelsome and meticulous!

Re:What a foolish piece of Babel (1)

cale (18062) | more than 13 years ago | (#133137)

Right dude, you might want to keep both your position on abortion and your thoughts on religion to yourself or we are gonna have a huge flame war here.
Intelligence, in my mind, is the ability to make connections between data, which is why even people who don't know much (young kids) can be intelligent: they can synthesize conclusions from seemingly random and unconnected pieces of data. "Red. Truck." for instance; most kids would probably spit back "fire truck" or something along those lines.
To have an intelligent system it would need the same data that we get, but also an amazing way to link that data, and even to handle conflicting data.
While I don't know if Cyc is going to achieve true AI, having all of that data entered could prove useful to other projects later on.

Re:....What the brain alone could do (1)

nut (19435) | more than 13 years ago | (#133139)

This is the interesting thing about the definition of AI. Someone, such as Alan Turing, comes up with a provable definition of AI, and someone else manages to write software that meets the proof. Of course, as soon as the software is written, we immediately discard the proof as invalid, because "we understand how it works," and the software is "just a set of rules." Eliza broke the Turing test, and Deep Blue beat our best chess player. Meanwhile, as psychologists study how our brain works, it begins more and more to look like a computer. One quote I like from psychology: "At first we studied the soul, then we studied the mind, now there is only behaviour."

Artificial Stupidity (2)

AstroJetson (21336) | more than 13 years ago | (#133143)

Let us not forget about these guys [totse.com] .

Re:A fraud (1)

mskfisher (22425) | more than 13 years ago | (#133146)

ooo, i like your .sig.

Re:Category error (2)

iapetus (24050) | more than 13 years ago | (#133147)

Searle's thought experiments are by no means universally accepted as a 'proof' that Turing machines cannot be intelligent. I recall that we spent almost an entire lecture during my Artificial Intelligence MSc course looking at arguments against the Chinese Room argument. There are interesting ideas there, but I rank the idea that they provide definitive proof that Turing machines cannot be intelligent up there with the idea that Kevin Warwick is a cyborg, as did many of the staff.

Re:Depends what you mean by AI... (2)

iapetus (24050) | more than 13 years ago | (#133148)

I doubt the defense department would be so interested in Cyc if it were "just a nice toy". :)

Sure they would. Plenty of things that are nothing more than 'nice toys' for the AI world have practical applications. The question isn't whether it's useful, but whether it's furthering the bounds of what AI can do, and the suggestion is that it isn't.

Re:Depends what you mean by AI... (4)

iapetus (24050) | more than 13 years ago | (#133150)

Artificial Intelligence is defined differently by different people but one widely accepted definition is "The ability for a machine to perform tasks which are normally thought to require intelligence".

Minsky, I believe. The version I heard was "The ability for a machine to perform tasks which if carried out by a human being would be perceived as requiring intelligence."

I also like another definition of AI, as provided by that greatest of scholars, Anonymous: "Artificial Intelligence is the science of making computers that behave like the ones in the movies."

Nice revenue model (2)

dmorin (25609) | more than 13 years ago | (#133153)

Ok, so they're gonna open up about 5% of the knowledge base for free exploration, and license for the rest? I expect the majority of questions put to the free portion will be met with "I can tell ya, but it'll cost ya." :)

Seriously, I expect it will be compared to Ask Jeeves, and thus not taken seriously since the brief surge of natural language engines died out so mysteriously. Personally I think they'll have a better chance if they say "Look. All you corporations out there struggling with your "business rules" database? 90% of those rules are common sense. Cyc will take those off your hands, as well as bringing common sense to the table that you never even considered. That'll free you up to really focus on your business specific issues." The example I can think of is that for every business specific rule I have that says stuff like "If a customer in category X has transacted more than $Y worth of redemptions in a day, then alert a customer representative", I have 10 that say stuff like "If you sold all of your shares of a mutual fund you can't sell any more."
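
(A tiny Python sketch of the two kinds of rule being contrasted here. The field names, threshold, and rule wording are made up for illustration, not taken from any real trading or compliance system.)

# Contrasting a business-specific rule with a common-sense rule.
# All names and thresholds below are invented for illustration only.
def business_specific_rules(account, txn):
    """Rules a firm genuinely has to write itself."""
    alerts = []
    if account["category"] == "X" and account["redemptions_today"] + txn["amount"] > 10_000:
        alerts.append("Alert a customer representative: daily redemption limit exceeded.")
    return alerts

def common_sense_rules(account, txn):
    """Rules that are really just common sense about the world."""
    alerts = []
    if txn["type"] == "sell" and txn["shares"] > account["shares_held"]:
        alerts.append("You can't sell more shares of a fund than you own.")
    return alerts

account = {"category": "X", "redemptions_today": 9_500, "shares_held": 0}
txn = {"type": "sell", "amount": 600, "shares": 10}
print(business_specific_rules(account, txn) + common_sense_rules(account, txn))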

Cyc is mostly hype (1)

scruffy (29773) | more than 13 years ago | (#133157)

Cyc is a knowledge base with a lot of facts in it. It is a product of the 80s when expert systems were going to solve all of AI's problems. The hype factor of Cyc is so large that it is almost impossible to determine what Cyc's capabilities are. No well-defined problem is being solved. All we ever see are these cute question-answer examples. If anyone can point to any coherent evaluation of Cyc (as opposed to all the anecdotes that fill the newspaper stories), we would all appreciate it.

Re:No problemo... (2)

wiredog (43288) | more than 13 years ago | (#133165)

Yeah, but Data was second or third generation. M5, IIRC, didn't have an off switch, or at least not one that was reachable.

Re:Category error (1)

_Quinn (44979) | more than 13 years ago | (#133166)

Yet Searle did not prove that there is more to semantics than syntactic manipulation! How do you propose to distinguish between the 'Chinese room' and a trained translator?

-_Quinn

Re:....What the brain alone could do (1)

greenrd (47933) | more than 13 years ago | (#133167)

Eliza broke the Turing test

I'm sorry, but that is completely false. Yes, certain people did believe that Eliza was a human being - but that wasn't a Turing test, because:

  • It didn't involve comparing Eliza to another human being in the same setting (dubious relevance, I admit, but this is a requirement of the Turing Test)
  • It didn't involve a panel of judges (this is absolutely necessary to avoid setting the bar too low)
  • And, most importantly of all, the people using Eliza were not told to try and reveal whether it was a computer or not! That makes all the difference - to whether the questions are easy or impossibly "hard" for the computer to understand.

The Turing Test has still never been passed. There is a cash prize on offer - the Loebner Prize [loebner.net] - to anyone who writes a program which passes the Turing Test. (A much smaller prize is also given to the "most human-like" software at each Loebner Prize competition) Of course, no-one is anywhere near winning the main prize - including Cyc, which is quite capable of spewing out nonsense when confused.

In my opinion - speaking as a computer scientist in training - the Turing Test is an excellent test of AI.

Re:Let this thing surf the net for info.... (2)

paRcat (50146) | more than 13 years ago | (#133169)

Or if it was sickly perverted:

alt.binaries.pictures.erotica.win32

Re:What a foolish piece of Babel (3)

paRcat (50146) | more than 13 years ago | (#133170)

Here's something quite cool for you...

When my son was born he was strong enough to roll himself over, which isn't typical. When I talked, he rolled and looked at me. A baby, less than 10 minutes after being born, could recognize my voice. Not to mention those that have noticed a baby reacting to a voice while still in the womb. Very cool. He's ten months old now, and it's quite amazing how much smarter he gets every day.

What I've always wondered is exactly how we could recognize intelligence in a machine. I already knew that my child had the ability to be intelligent because he is human... but will it take a truly amazing act before we acknowledge intelligence in something that "shouldn't" have it?

Re:Kinda behaves like my kids.... (2)

J.Random Hacker (51634) | more than 13 years ago | (#133172)

From what I understand, yes, it did something very much like that for something like 18 months.

There is a very good book on the history of AI, written about 5 years ago (maybe longer), that described Cyc and Lenat's research leading up to it, along with the contributions of a great many others. Unfortunately, the volume is sitting on my AI/Math/Computer Science bookcase at home, and I can't remember either the title or author :(

Blatant Plug (1)

ghoti (60903) | more than 13 years ago | (#133173)

Exactly. For a discussion of this point, see my Thoughts on AI [kosara.net] (yes, this is a blatant plug).

Re:Cyc? What's that got to do with AI? (5)

Colm@TCD (61960) | more than 13 years ago | (#133174)

What a lot of toss you do talk. With your sarcastic Daddy-knows-best "sorry" and "the fact is". Searle proved no such thing as your assertion; he merely provided a series of thought experiments which force us to think about what intelligence might actually consist of.

If it looks intelligent, and acts intelligent in all conceivable circumstances, then we'll be forced to conclude that it is intelligent, even if we know what's going on under the hood. Are you suggesting that, should we one day discover the secrets of the emergent behaviour of the human brain (reducing it, therefore, to "a simple rules system"), we will suddenly cease to be intelligent?

Re:....What the brain alone could do (1)

StrawberryFrog (67065) | more than 13 years ago | (#133179)

Whilst I agree with you generally, Eliza did not break or pass the Turing test. Eliza can fool someone who is not expecting it for a few minutes with canned responses. Its only conversational tactic is to get the other person to talk - i.e. to be a good listener. Heck, /dev/null is a good listener.

They hold contests regularly, pitting the best conversational programs against human judges (sorry... I don't remember any more details), and no programs are even close yet to being able to fool an alert judge. Human conversation is just far too broad for software... yet.

For instance, If you say to Eliza
"My dog's name is Spot",
Eliza will reply
"Why do you say that your dog's name is Spot?"

Then a minute later, when you ask it
"What is my dog's name?"
The best that you'll get back is
"Why do you ask?"

i.e. a stock canned response indicating a general lack of comprehension.
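
(For what it's worth, here is a toy Python sketch of the kind of canned-response pattern matching described above. It is not Weizenbaum's actual ELIZA code; the patterns and replies are illustrative only.)

import re

# Toy illustration: a couple of regex reflections plus a stock fallback.
# Nothing is remembered between turns, which is exactly the failure shown above.
rules = [
    (re.compile(r"my (.+) is (.+)", re.I),
     lambda m: f"Why do you say that your {m.group(1)} is {m.group(2)}?"),
    (re.compile(r"what is my (.+)\?", re.I),
     lambda m: "Why do you ask?"),
]

def reply(utterance):
    for pattern, make_response in rules:
        m = pattern.search(utterance)
        if m:
            return make_response(m)
    return "Please go on."  # stock response for anything unrecognized

print(reply("My dog's name is Spot"))   # Why do you say that your dog's name is Spot?
print(reply("What is my dog's name?"))  # Why do you ask?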

....What the brain alone could do (3)

StrawberryFrog (67065) | more than 13 years ago | (#133180)

"We finally are getting to the point where machines will be able to do what the human brain alone can do," says James C. Spohrer, chief technical officer of IBM's venture capital relations group, who has studied Cyc's potential as a commercial project. "The time feels right."

The article is good, but this is a poor quote. As others have pointed out, what "the human brain alone can do" is a moving target. Remember when only humans could play world-class chess? Prove theorems? Add two numbers together?

"That which makes us human and not just machines" is often defined simply as "the stuff that machines can't do"... yet.

Teaching Cyc to "believe"? (1)

jhoffoss (73895) | more than 13 years ago | (#133183)

"Klein spends hours inculcating the system with such abstract concepts as "belief"--a difficult notion for a computer program to grasp, possibly because it has more to do with point of view than with anything true or false about the real world." ...So Cyc doesn't believe in belief? How does that work, and would he explode in a War-Games-like scenario of self-destruction were he posed this question?
---

Re:Category error (1)

jhoffoss (73895) | more than 13 years ago | (#133184)

Hmmm... in response to being asked "what is the proof that the human [brain] does not operate by 'blind adherence to rules'":
"We simply don't know enough yet to say for definite, but the fact that it has both a changing topology and is analog would indicate it doesn't work in the same manner as a Turing machine,"

Why can't a computer or AI (or a rule-set, as you're referring to Cyc) be dynamically changing? In fact, it must be, if it is learning anything. Once the rule-set hits a critical mass, it would theoretically be able to completely remove the need for data-entry of facts and learn new data by interaction with programmers (but we'll see what happens in that respect, with Cyc...)

You say the dynamic/analog nature of the human brain indicates that our brain is more than a rule-set. The fact that you (or anyone else) do not know this for sure works against your argument as much as it does for it. It is very possible that the human brain is just a complex rule-set.

I pose this question, which you skirted a bit in your last posting: What will it take for true AI? If a computer has a huge algorithm to make a certain decision for some arbitrary problem, and it can come to the same solution that I could, why is that not artificially intelligent? How do you define intelligence versus artificial intelligence (theoretically or practically)?
---

What a horrific concept... (3)

selectspec (74651) | more than 13 years ago | (#133185)

I don't know about you guys but I am really scared here. This sort of thing makes us have to ask ourselves fundamental questions about what is right and wrong. Hollywood actors (that aren't chicks getting naked) should not have personal websites. Do we really want our children accidentally browsing to Arnold's site?

Re:Hollywood planted this piece (4)

selectspec (74651) | more than 13 years ago | (#133186)

Kind of like when Jurassic Park was released, they revived Tony Bennett's career (brought a dinosaur back to life).

Whew, dodged a bullet on THAT one... (5)

artemis67 (93453) | more than 13 years ago | (#133192)

"HAL killed the ['2001'] crew because it had been told not to lie to them, but also to lie to them about the mission," he observes. "No one ever told HAL that killing is worse than lying. But we've told Cyc."

But have they told Cyc not to use humans as batteries?

Re:It won't be Arnold Doing the Stopping... (3)

Dr_Cheeks (110261) | more than 13 years ago | (#133211)

Your .sig says: Once upon a time there were two Chinamen. Now look how many there are. [my emphasis]

Surely one of them would have had to be a woman : )

Scary (5)

Dr_Cheeks (110261) | more than 13 years ago | (#133212)

"HAL killed the ['2001'] crew because it had been told not to lie to them, but also to lie to them about the mission," he observes. "No one ever told HAL that killing is worse than lying. But we've told Cyc."

Um, am I the only one creeped out by this? And presumably they've told it all sorts of other moral stuff too, but who gets to decide its morals? It's all kinda subjective. And how do we know that they've not said anything like "It's worse to let any single one of us die than it is to let any number of other people die" or something (I doubt very much that they'd do that, but I'm just trying to come up with an example, and it's Friday afternoon and I'm off for the weekend in 2 hours)?

Ultimately, Cyc isn't actually making decisions, but regurgitating what it's been told previously - the people programming it make the decisions. I've formed opinions about a great many things, and some of those opinions contradict what a lot (or all) of my friends and family think, but I reached them myself - Cyc needs to be able to do this before it will be sentient. Right now it's just a big, sophisticated database (only a way further along the line than Jeeves).

When it can make worthwhile posts to Slashdot I might look at it again : )

Re:Cyc? What's that got to do with AI? (2)

BMazurek (137285) | more than 13 years ago | (#133236)

His assertion (although I disagree) is more fundamental than Moore's Law. Computational power is irrelevant to his argument. Even infinite computing power would not remove the obstacle he is presenting.

It is like dealing with NP-Complete problems like the Travelling Salesperson Problem. More computer power will make the solution to larger problems more easily attainable, but it does not move the problem from NP (where it is believed to be) to P (where we would like it to be).
---

Re:Category error (2)

BMazurek (137285) | more than 13 years ago | (#133237)

You seem to be making two assertions:

  • you and your girlfriend work in roughly the same way as me
  • the computer works on totally different principles

The first, I grant to you, is probably true (but not certain). The second, I'm far less certain about. You could be right, you could be wrong. I simply do not know.

I'm going to give the computer a hell of a lot more scrutiny before making such claims.
I think we all will. If such a computer comes along that people claim satisfies the Turing Test, I have a sneaking suspicion that every one of us would love a crack at it. See if we can succeed in knocking it off its pedestal...
---

Re:Category error (4)

BMazurek (137285) | more than 13 years ago | (#133238)

You are assuming that the brain is also a Turing machine and that by some miracle of "emergent behaviour" intelligence arises. But that's obviously not true, as Searle showed, because Turing machines cannot be intelligent!

I'm not quite sure I follow you, especially in light of this:

Can the operations of the brain be simulated on a digital computer? ... The answer seems to me ... demonstrably `Yes' ... That is, naturally interpreted, the question means: Is there some description of the brain such that under that description you could do a computational simulation of the operations of the brain. But given Church's thesis that anything that can be given a precise enough characterization as a set of steps can be simulated on a digital computer, it follows trivially that the question has an affirmative answer.
Searle - The Rediscovery of the Mind

If you believe a brain and its reactions can be simulated by a computer, why is that not sufficient for intelligence?

Is this belief associated, in any way, to theological beliefs?

Please explain your position, as I am genuinely interested in understanding it.

---

Hollywood planted this piece (4)

Jonathan Blocksom (139314) | more than 13 years ago | (#133241)

The only reason this story is getting printed is because Steven Spielberg's AI movie is coming out soon, and his studio is trying to drum up interest in the subject. Sort of like how stories about the possibility of asteroids hitting the earth were popular several weeks before Armageddon & Deep Impact came out.

(Not that the company isn't real or working hard on the area, but just take this with a grain of salt...)

Learning for itself (2)

nick255 (139962) | more than 13 years ago | (#133242)

At the moment it seems that all of the knowledge in Cyc has to be entered in by hand, reviewed by a technical board, etc.

IMHO for something like this to really take off, once they have taught it enough to have a basic understanding of the world, they should teach it various methods for discerning 'the truth' (mainly how to decide if a source is trustworthy). Then they can give it a fast internet connection, let it browse for a while, and hopefully it should have learned a lot!

Consider it the equivalent of bringing up children: it is when they can read for themselves that they really start learning.

Questionable intelligence (5)

Sir Runcible Spoon (143210) | more than 13 years ago | (#133245)

Asked to comment on the bacterium's toxicity to people, it replied: "I assume you mean people (homo sapiens). The following would not make sense: People Magazine."

Do you know, I have to work with people who talk like this all the time. They like to think they are intelligent too.

Re:Cyc? What's that got to do with AI? (4)

Drone-X (148724) | more than 13 years ago | (#133246)

Surely this work isn't irrelevant. The information stored in it will probably be very useful to the AI field.

I was thinking myself it would be nice to use Cyc to train neural networks. That way you might be able to 'grow' (the beginning of) a real AI. Does this sound feasible?

Setting the bar low... (4)

gilroy (155262) | more than 13 years ago | (#133249)

Blockquoth the article:
Cyc already exhibits a level of shrewdness well beyond that of, say, your average computer running Windows.
But then again, so does a light switch. :)

Re:I'm not so impressed.... (2)

WolfWithoutAClause (162946) | more than 13 years ago | (#133259)

Actually, many of the things a human 'learns' in 17 years it already knew.

Mammals are all born with a pretty good understanding of basic anatomy (e.g. horses and cats and dogs aren't bad at reading human body language - they know about faces, arms, legs - nobody taught them this stuff either), they all have built-in emotions, and some appreciation of interactions between cause and effect: I poke/attack you, you usually go away, that kind of thing. Often this knowledge is semi-unconscious, but it's built in.

Computers are the ultimate tabulae rasae. They need this stuff that a few billion years of evolution knocked into our wiring.

As another example, it seems pretty likely from the research on language that humans are born with the ability to learn ANY human grammar; we are genetically programmed for languages (if you challenge that, note that the only real difference between humans and sheep is genetic - lambs cannot learn language).

That's what CYC is. To wildly oversimplify, it's a billion years of evolution plus a kindergarten education.

Re:No problemo... (1)

mesusha (184001) | more than 13 years ago | (#133271)

Use that old fashioned off switch before it gets up to any dirty tricks. It does have an off switch, right? Even Data has an off switch... ;-)
Well, no. In the film it was said that "we must destroy all disc drives...", which implies that even the disc drives of the future will have more common sense than people.

Searle talked about consciousness (1)

mesusha (184001) | more than 13 years ago | (#133272)

If you are referring to the Chinese Room argument, that was about consciousness. A person can appear to understand Chinese by just using a rule-book to mechanically compose answers to questions in Chinese, and not have a clue about what he is answering, even if he memorized the rule-book. Searle did not 'prove' with this argument that those answers could not be highly intelligent. If you ask me, he also did not prove that the system cannot be conscious. You just end up with a schizophrenic with two independent minds that are divided by a language barrier.

Re:Let this thing surf the net for info.... (1)

dynoman7 (188589) | more than 13 years ago | (#133273)

Yeah. Think of it as a big spell checker.


If the only tool you have is a hammer, you tend to see every problem as a nail.

Nothing interesting yet..... (1)

leuk_he (194174) | more than 13 years ago | (#133275)

I'll BE BACK when there is something interesting to be told about psych-corp.

maybe if the Master Control Program tells me something interesting is happening here.

Kinda behaves like my kids.... (4)

SomeoneGotMyNick (200685) | more than 13 years ago | (#133278)

I wonder if in Cyc's early years, instead of being shrewd enough to ensure it knows what you're talking about, it kept asking "Why?, Why?, Why?" to everything you explained to it.

I see a good IM use for this (4)

SomeoneGotMyNick (200685) | more than 13 years ago | (#133279)

Cycorp's 65-member staff engages in a dialogue day and night with their unremittingly curious electronic colleague.

Having trouble making friends online? Can't even find one person who will put you on their buddy list? For an additional $9.95 per month on your AOL account, you can have an artificial Buddy to chat with who's online 24 hours a day. His/Her screen name is Cyc342

Re:Cyc? What's that got to do with AI? (1)

jamescford (205756) | more than 13 years ago | (#133281)

Unfortunately, the Turing test is no indicator of strong AI at all, just a very good rules system. Searle proved years ago that no rules-based system (i.e. a Turing machine) can ever be truly intelligent, no matter how much so it seems.

Not everyone [aol.com] believes Searle really "proved" his case with the Chinese Box argument. I haven't read this particular treatment of it, but just the fact that a Ph.D. in philosophy was granted for it would seem to suggest that it's not such an open-and-shut case, huh?

It wouldn't be the first time that something someone "proved" didn't hold up when you look at it a different way. To say that "something can't be truly intelligent no matter how much so it seems" seems to rule out even giving the issue the serious thought it deserves.

Jamie

Discover Magazine (1)

iaamoac (206206) | more than 13 years ago | (#133282)

Just thought I'd add my two cents and say that back in the early 90's (91 or 92 IIRC), Discover magazine ran an article about Cyc. I'd post a link if I could find one.

Iaamoac

Humans are unreasonable (2)

sdo1 (213835) | more than 13 years ago | (#133289)

The first intelligent AI will have to be, at times, unreasonable. If it just spews out facts, then it is indeed just a computer. Humans have an ability to defy logic and do things that simply are not reasonable. For the computer to learn, it will have to be allowed to do this.

-S

Re:Scary (1)

dSV3Hl (215182) | more than 13 years ago | (#133290)

How is programming it to have morals/knowledge/whatever any different than teaching a kid? We all learn things before we have our own opinions, so why would an AI be any different?

On the subject of regurgitating... 12 years of my life was spent in places where regurgitation WAS considered intelligent! :P

BTW, how many worthwhile posts have you read on slashdot anyways? (this one included :)

singinst (2)

zephc (225327) | more than 13 years ago | (#133293)

check out singinst.org [singinst.org] for a good idea of how an AI should be designed
----

Re:Cyc? What's that got to do with AI? (1)

kenthorvath (225950) | more than 13 years ago | (#133294)

Actually it is based upon AM, which was used to "learn" and deduce many rules of mathematics given just a simple starting set. This is EXACTLY what humans have done over the years, and I consider myself to be pretty intelligent. (RTFA)

Let this thing surf the net for info.... (2)

kenthorvath (225950) | more than 13 years ago | (#133295)

If Cyc could follow every link and (presumably) decipher every language, all the while incorporating vast amounts of information into its already HUGE database, I would imagine that it would be able to pick out all of the inaccuracies based upon certain assertions that it already "knows" to be true. OTOH, it would be quite frightening to watch it browse the newsgroups...

Engineer: Cyc, please list the newsgroups that you evaluated last night...

Cyc: alt.binaries.pictures.erotica.unix

Life Imitates Asimov, thanks to Clarke? (3)

The Monster (227884) | more than 13 years ago | (#133296)

The article ends with:
"HAL killed the ['2001'] crew because it had been told not to lie to them, but also to lie to them about the mission," he observes. "No one ever told HAL that killing is worse than lying. But we've told Cyc."
Could it be that they've told it:
  1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I'm not sure someone with $50M invested is going to put 2. and 3. in that order, though.

Re:What a foolish piece of Babel (3)

r101 (233508) | more than 13 years ago | (#133299)

Until Genesis is retconned to include God breathing life into a lump of sand, we won't see Artificial Intelligence.

I'll file a bug report with the Vatican.

TrulyOpenCyc? (2)

OblongPlatypus (233746) | more than 13 years ago | (#133300)

I notice that OpenCyc allows free access to its knowledge base, but that submitted additions would have to pass through a review board. Wouldn't it be great to have a truly open Cyc, which everyone could talk to and "teach"? It might very well end up a ConfusedCyc, but the result would be interesting nonetheless.

But since OpenCyc only seems to contain about 10% of the full knowledge base, maybe a better goal would be to make the entire thing available for public access.

How would a public-access Cyc be financed? I don't know, but I'd hate to see the AI equivalent of ad banners...

Like Disney paid Art Bell to run Atlantis stories? (2)

Shivetya (243324) | more than 13 years ago | (#133301)

Tell me it isn't so.

Notice how many "Atlantis" shows popped up recently...

I just never thought Art was in on the take. He seems so honest and truthful.

Biology? What's that got to do with AI? (3)

dasunt (249686) | more than 13 years ago | (#133305)

Sorry folks, but carbon-based biology has nothing to do with the future of intelligence. It might be nice to believe that a group of cells randomly firing electrical signals at one another can create a sentient being, but such thoughts are naive. Sure, each individual cell might be alive, but it doesn't mean that a group of dumb cells working together would make an intelligent being. It's like believing that since one rock is dumb, a mountain would be bright.

Unfortunately, some people look at the behavior of h. sapiens and shout "intelligence!" Sure, it may appear that h. sapiens is intelligent, but only on a short examination. A human may pass a Turing test, but even though the human proved that he or she is indistinguishable from an intelligent entity, it doesn't mean anything, because I feel that I can make up any arbitrary decision I like, so I can declare that a being that is indistinguishable from a sentient entity is still not sentient. :)

Seriously though, the so-called "intelligent" h. sapiens owes its "intelligence" to a group of electrical impulses and a few simple chemical reactions among the many millions of cells that make up the creature's "brain". With a powerful computer, we could simulate the reaction of chemical/electrical impulses of h. sapiens, but no one 'cept an undergraduate would be foolish enough to call such a simulation "intelligent". It can be argued that h. sapiens runs mainly on instinct and conditioned responses; it's very clear that humans seem incapable of long-term thinking, a sign of intelligence, and are thus doomed to ruin their habitat through environmental neglect and ever more damaging wars.

So remember, humans aren't intelligent, they only think they are.

Re:Humans are unreasonable (1)

daniel_isaacs (249732) | more than 13 years ago | (#133307)

It's called Artificial Intelligence for a reason. It isn't to emulate the unreasonable, or to replicate Human thought. It's to be Super-Human; being able to Reason efficiently and consistently. It doesn't need to fall in Love.

Intelligence != Human

But has it... (1)

Fat Casper (260409) | more than 13 years ago | (#133313)

Deduced the existence of tapioca pudding?


Why is the sky blue?

Re:No problemo... (2)

Jigoku (262894) | more than 13 years ago | (#133317)

I like to carry my own personal off switch...it's called a Desert Eagle. That way if the built-in off switch fails, I have a back-up. Those machines aren't going to be taking over my town any time soon.

Actually, I don't have a gun. But, if robots start to be integrated into our lives, I will make sure to carry one. I don't trust machines. If and when we do have robots all over the place in sensitive locations (the home, gov't buildings, schools, etc.), then we will need to make sure that we can trust the lives of our family in the hands of some metallic bucket of bolts. We have to make sure that we have "open" robots as to prevent a backdoor being built into it. Think of how bad a backdoor in a security bot in charge of your family would be. I won't trust the machines unless I can actually see any code or whatever is used to make the things run.

A message to future robot companies: Either keep your robots "open" or I'm going to keep a gun pointed at it.

Re:Cyc? What's that got to do with AI? (1)

Popocatepetl (267000) | more than 13 years ago | (#133319)

That is a very good idea. Someone moderate up the post to which I am replying - it's a good point.

Come with me if you want to live (1)

thejake316 (308289) | more than 13 years ago | (#133342)

Get your ass to Mars. Hello cutie pie! I'm a cop, you idiot! Get on the abdominator while I shout slogans at you. Maybe you all are homosexuals, too? Maria, my mighty heart is breaking...

Re:No problemo... (1)

jafiwam (310805) | more than 13 years ago | (#133343)

Unreasonable...

... in which case there are a bunch of these things sitting around my building called "NT Server" and "Windows ME" and so on...

Of course, they aren't CREATIVE about being unreasonable, sticking to hard-lockups and blue screens and so on.

Re:Cyc? What's that got to do with AI? (1)

sharkticon (312992) | more than 13 years ago | (#133348)

Yeah, that sounds like a good idea. I meant to imply Cyc has no use as the basis for AI, not that it didn't have any use in the field at all :) And it's just the sort of thing expert systems need as well...

You miss the point (1)

sharkticon (312992) | more than 13 years ago | (#133349)

The point is that no matter how powerful computers get, they are still Turing machines. Whether they run at 1KHz or 10^100THz, they still operate on the same principles, and as such will never demonstrate true intelligence, i.e. semantic understanding.

Yes, I know (1)

sharkticon (312992) | more than 13 years ago | (#133350)

Sorry if I didn't make the distinction clear enough, but I was talking about Strong AI -"consciousness" basically. I don't disagree that computers will be highly intelligent, I just don't believe that Turing machines will ever be conscious.

Cyc? What's that got to do with AI? (2)

sharkticon (312992) | more than 13 years ago | (#133352)

Sorry folks, but Cyc has nothing to do with the future of AI at all. It's just a big list of rules that might be nice for certain expert systems, but it will never produce anything intelligent, no matter what part of the company's PR you buy into.

The fact is that no modern computer, no matter how powerful it gets, will ever be capable of creating true AI. Sure, they may pass the Turing test, but so does Theo de Raadt, and I can simulate his responses with nothing more than a few rules and a large table of swear words!

Unfortunately, the Turing test is no indicator of strong AI at all, just a very good rules system. Searle proved years ago that no rules-based system (i.e. a Turing machine) can ever be truly intelligent, no matter how much so it seems.

Sorry, but Cyc is just a nice toy and of no use in serious AI research.

Re:Category error (2)

sharkticon (312992) | more than 13 years ago | (#133353)

I think the point he was trying to make was that the Turing test is the only test we have for intelligence. How do I know that you, my girlfriend or anyone else is intelligent? The answer- I don't but I assume that you are because you act in a way that I consider intelligent.

No, you're right. But you and your girlfriend work in roughly the same way as me, whereas the computer works on totally different principles. Therefore, while I'm justified in assuming you're also intelligent, I'm going to give the computer a hell of a lot more scrutiny before making such claims.

What is the proof that the human brain does not operate by 'blind adherence to rules'?

We simply don't know enough yet to say for definite, but the fact that it has both a changing topology and is analog would indicate it doesn't work in the same manner as a Turing machine, as would the Chinese room experiment.

Not really (2)

sharkticon (312992) | more than 13 years ago | (#133354)

QM has shown us that space and time are digital on a small enough scale through duality theory and superstrings, although even this isn't certain - we really don't have a good enough grasp of the underlying mechanisms of superstring theory yet to be sure. But I don't recall energy or momentum having discrete eigenstates, so it's not truly digital.

But apart from that, these effects are on a small enough scale (10^-33 cm and 10^-43 sec) that they are for all intents and purposes irrelevant to structures like we're talking about here. Even for systems such as the hydrogen atom we can assume space and time are continuous - the brain is a somewhat grosser system than that, although quantum effects may have a role to play.

Basically, unless we can perfectly model the brain at around the Planck scale, any question of discreteness is totally irrelevant and we can assume all processes are analog. And even if we could, you're still forgetting the randomness inherent in quantum mechanics with respect to collapse of the wave function and the creation of virtual particles. So as a model, it would instantly diverge from reality anyway...

Category error (3)

sharkticon (312992) | more than 13 years ago | (#133359)

Searle proved no such thing as your assertion; he merely provided a series of thought experiments which force us to think about what intelligence might actually consist of.

And very good ones at that, which demonstrate the underlying principles of Turing machines, and show how they cannot produce semantic understanding, merely syntactical manipulation of data.

If it looks intelligent, and acts intelligent in all conceivable circumstances, then we'll be forced to conclude that it is intelligent, even if we know what's going on under the hood.

Bzzzt! Wrong... the Turing test says nothing about whether something is intelligent, merely whether it can fool a person. There are already some pretty good pieces of software out there about this, and they'll get better in the next few years. But they won't be intelligent. Blind adherence to rules is not intelligence.

Are you suggesting that, should we one day discover the secrets of the emergent behaviour of the human brain (reducing it, therefore, to "a simple rules system"), that we will suddenly cease to be intelligent?

Now here's your category error. You are assuming that the brain is also a Turing machine and that by some miracle of "emergent behaviour" intelligence arises. But that's obviously not true, as Searle showed, because Turing machines cannot be intelligent!

Re:Category error (2)

tb3 (313150) | more than 13 years ago | (#133362)

If you believe a brain and its reactions can be simulated by a computer, why is that not sufficient for intelligence?

Well, I have a problem with the first part of your statement. Roger Penrose demonstrated, at least to my satisfaction, that it is impossible to simulate parts of the human mind with a computer. Check out The Emperor's New Mind [fatbrain.com] for the details. It's rather complex and involved (this guy hangs out with Stephen Hawking) but I managed to follow the argument and it made sense.

Any AI experts care to comment?

"What are we going to do tonight, Bill?"

Jeesh, not Cyc again (5)

BillyGoatThree (324006) | more than 13 years ago | (#133363)

I first read about Cyc in Discover Magazine back when I was a Junior in HS. I thought it was the coolest thing since frozen bread. Then I read up on the topic of AI.

I have no doubt that one day AI will come to pass. I mean that in the strongest possible terms--a piece of software will pass the rigorous Turing Test and will be agreed by all to be intelligent in exactly the same sense humans are.

I *DO* have doubts that Cyc will be at all related to this outcome. Think about it: When I say "Joe is intelligent" do I mean "Joe knows a lot of facts?" No. Do I mean "Joe is good at symbolic logic?" No. I mean "Joe pursues goals in a flexible, efficient and sophisticated manner. He has a toolbox of methods that is continually growing and recursive." Does this description apply to Cyc?

No. Lenat and friends created a bunch of "knowledge slots" that they have proceeded to fill in with pre-digested facts. What do I mean by "pre-digested"? For instance, Cyc might come up with an analogy about atoms being like a little solar system with electron planets "orbiting" the nucleus sun. Great, but that analogy came about because of how the knowledge was entered. Put in "Planets orbit Sun" and "Orbit means revolve around" and "Electron revolves around Nucleus" and then ask "What is the relationship of Electron to Sun?" - the analogy just falls out with some symbolic manipulation. It would be a lot more impressive if Cyc made analogies based on data acquired like a human: full of noise, irrelevance and error, based on self-generated observations.
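
(A crude Python sketch of how that kind of analogy can just "fall out" once the facts are entered in a form that already lines up. The predicates and facts are invented for illustration; this is nothing like Cyc's real inference engine.)

# Once the facts are hand-entered so they line up, the "analogy" is just lookup.
facts = [
    ("orbits", "planet", "sun"),
    ("orbits", "electron", "nucleus"),
]

def analogy(entity, facts):
    """Find something else that stands in the same relation as the given entity."""
    for rel, a, b in facts:
        if a == entity:
            for rel2, a2, b2 in facts:
                if rel2 == rel and a2 != a:
                    return f"{a} is to {b} as {a2} is to {b2}"
    return None

print(analogy("electron", facts))
# -> "electron is to nucleus as planet is to sun"
# The mapping was effectively built by hand when the facts were phrased to match.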

Cyc is a highly connected and chock-full database with a flexible select language. As a product that's awesome. As a claim to AI it's pretty weak.
--

Cyc = YAES (1)

popeyethesailor (325796) | more than 13 years ago | (#133364)

Okay, so we have a machine which responds "intelligently" to queries... but what does it do?

Maybe it passes the Turing Test. But I want to see it as an "Entity", moving, responding, reacting to its environment, using its knowledge base. That would be passable AI in my book.

Possible?

I'm not so impressed.... (2)

GreyPoopon (411036) | more than 13 years ago | (#133369)

How is this an improvement? It took 17 years to teach the computer what it takes a human about 17 years to learn. Does this mean that we'll have to start "raising" computers now?

(Note that I'm joking and perfectly aware that the knowledge base can just be copied)

GreyPoopon
--

Re:Category error (2)

actiondan (445169) | more than 13 years ago | (#133370)

Bzzzt! Wrong... the Turing test says nothing about whether something is intelligent, merely whether it can fool a person. There are already some pretty good pieces of software out there about this, and they'll get better in the next few years. But they won't be intelligent. Blind adherence to rules is not intelligence.

I think the point he was trying to make was that the Turing test is the only test we have for intelligence. How do I know that you, my girlfriend or anyone else is intelligent? The answer- I don't but I assume that you are because you act in a way that I consider intelligent.

What is your test for intelligence?

Blind adherence to rules is not intelligence

What is the proof that the human brain does not operate by 'blind adherence to rules'?

Re:Depends what you mean by AI... (2)

actiondan (445169) | more than 13 years ago | (#133371)

The bit about the defense department and Cyc being a "nice toy" was joking - as indicated by the emoticon. :) I'm sure the defense department is interested in loads of cool stuff - I would be if I had that budget. :)

Re:Category error (2)

actiondan (445169) | more than 13 years ago | (#133372)

But you and your girlfriend work in roughly the same way as me, whereas the computer works on totally different principles. Therefore, while I'm justified in assuming you're also intelligent,



Or at least, to the best of your knowledge we work in the same way and so you assume we do :)



[on the difference between the human brain and a Turing machine] We simply don't know enough yet to say for definite, but the fact that it has both a changing topology and is analog would indicate it doesn't work in the same manner as a Turing machine, as would the Chinese room experiment.



One of the things I find most interesting about very large expert systems is the possibility of emergent behaviour as the complexity increases. A single human neuron on its own will not exhibit intelligence - its behaviour is deterministic. When combined with millions of others in the right conditions, intelligence emerges. I do not write off the idea that a sufficiently large expert system could begin to exhibit similar emergent behaviour.
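
A toy illustration of that point (my own sketch, nothing to do with Cyc or real neurons): a single linear-threshold "neuron" is completely deterministic and cannot even compute XOR, yet wire three of them together and the capability exists only at the level of the combination:

    # Each unit is trivial and deterministic; XOR appears only in the network.
    def neuron(weights, bias, inputs):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

    def xor(a, b):
        h1 = neuron([ 1,  1], -0.5, [a, b])    # fires on "a OR b"
        h2 = neuron([-1, -1],  1.5, [a, b])    # fires on "NOT (a AND b)"
        return neuron([1, 1], -1.5, [h1, h2])  # fires only when both fire

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))       # prints 0, 1, 1, 0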

I'm not convinced by a long way that they will, but I'm keeping an open mind. Assuming, on the basis of our currently very limited knowledge, that only certain routes will achieve the aims of AI research will limit our chances of finding the right one.

Re:You miss the point (2)

actiondan (445169) | more than 13 years ago | (#133373)

The point is that no matter how powerful computers get, they are still Turing machines. Whether they run at 1KHz or 10^100THz, they still operate on the same principles, and as such will never demonstrate true intelligence, ie. semantic understanding.

Well explained

What about if I use my obscenely powerful computer to simulate with absolute perfection every atomic (and sub-atomic) interaction in a volume of space that happens to contain a person?

There are some good arguments against this 'simulated intelligence' thought experiment but if we could simulate the brain inside a Turing machine, it would be more difficult to draw a distinction between what our brain does and what a Turing machine does.

Depends what you mean by AI... (5)

actiondan (445169) | more than 13 years ago | (#133374)

The fact is that no modern computer, no matter how powerful it gets, will ever be capable of creating true AI.

What do you mean by "true AI"? Artificial Intelligence is defined differently by different people, but one widely accepted definition is "the ability of a machine to perform tasks which are normally thought to require intelligence". Intelligence can also have a number of definitions, but key factors are generally the ability to acquire and use knowledge and the ability to reason.

Cyc is doing things that previously machines have not been able to do so it has a lot to do with the future of AI.

You are right to mention that rules-based systems will not bring us Strong AI, but you make the mistake of thinking that Strong AI == AI. Strong AI is not the only goal of AI research. Many AI researchers are, like the developers of Cyc, trying to create machines that can do things that have previously been the preserve of the human brain. Their work is just as valid as that of those striving for Strong AI, and at the moment it is having more impact on the world.

Sorry, but Cyc is just a nice toy and of no use in serious AI research.

I doubt the defense department would be so interested in Cyc if it were "just a nice toy". :)

It won't be Arnold Doing the Stopping... (1)

chemical55 (446280) | more than 13 years ago | (#133375)

...it's gonna be the Turing Police. I'll take Neuromancer over T1&2 any day.

Re:It won't be Arnold Doing the Stopping... (1)

chemical55 (446280) | more than 13 years ago | (#133376)

Not so, well..er..um..if it was done in Jurassic Park II surely same-sex reproduction can be done with humans too. No wait, the Chinese invented cloning, and then built a wall to keep their secret in. Uh..cuz the Mongols wanted it. Yes, yes, that's it. (Walks away) (Door opening and closing) (Car starts, peels out)

No need! (1)

Bottlemaster (449635) | more than 13 years ago | (#133377)

Genesis 2:7 says "And the Lord God formed man of the dust of the ground, and breathed into his nostrils the breath of life"
Get it right :P

It's not intelligent, but it's neat (1)

Bottlemaster (449635) | more than 13 years ago | (#133378)

Check out Daisy: http://leedberg.com/glsoft/daisy/index.html
It's for Windows, unfortunately, and you'll want to start with a blank database.
Non-Intelligent Language Learners are a cool approach to simulating intelligence. I made an IRC bot with it, and my friends and I have had some really interesting conversations with Daisy.
It works fine until the database gets near 100 KB in size.
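
I don't know exactly what Daisy does internally, but a common trick in this family of programs is a simple Markov chain over the words it has seen -- roughly something like this hedged Python sketch (not Daisy's actual code):

    import random
    from collections import defaultdict

    chain = defaultdict(list)              # word -> words seen following it

    def learn(sentence):
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)

    def babble(start, length=12):
        word, out = start, [start]
        for _ in range(length):
            if word not in chain:
                break
            word = random.choice(chain[word])
            out.append(word)
        return " ".join(out)

    learn("the cat sat on the mat")
    learn("the dog sat on the sofa")
    print(babble("the"))                   # e.g. "the cat sat on the sofa"

No understanding anywhere, but it can sound eerily conversational once the database grows.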

They're going about it all wrong! (2)

Bottlemaster (449635) | more than 13 years ago | (#133381)

I don't think it's possible for humans to create an intelligence on a computer... but we could have computers do it for us.
We could make a simulation with a few accurately modeled simple organisms (it would be extremely difficult, but possible) able to go through the simulated world, reproduce, etc. We'd run it much faster than normal time; I don't think anybody wants to wait a few billion years for it. Then we just let evolution take its course, making another intelligence the same way it made us. We would probably have to plug things into the simulation to create an environment in which intelligence would evolve.
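
To show the shape of the idea (and only the shape -- this toy Python loop evolves bit strings, which is nowhere near simulating organisms), the whole trick is vary, select, repeat:

    import random

    POP, GENES = 20, 30

    def fitness(genome):
        return sum(genome)          # stand-in for surviving in a real environment

    def mutate(genome):
        return [g ^ (random.random() < 0.05) for g in genome]   # flip ~5% of bits

    population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == GENES:    # "success" here is just all-ones
            break
        parents = population[:POP // 2]        # selection
        population = parents + [mutate(random.choice(parents)) for _ in parents]

    print(generation, fitness(population[0]))

Scale the environment and the organisms up by many orders of magnitude and that's roughly what I'm describing -- along with its compute bill.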

The only problem with this method is that we would be responsible for countless deaths, many of them of artificial creatures with real intelligence.

AND I know not all of you believe that we evolved from another life form, so the alternative method would be to assume godlike powers, and do whatever gods do when they make intelligent beings.

Traveller Trillion Credit Squadron (1)

Krelboyne (451082) | more than 13 years ago | (#133383)

The article references an early 80s space-based war game called Trillion Credit Squadron. Sounds very cool. Unfortunately, it predates me.

I used to love playing VGA Planets when I was a junior high schooler as well as text-based war games. Anyone know of any currently running games on the Internet that involve space, intergalactic combat, ship building, etc, etc?

-----------------

Re:Cyc? What's that got to do with AI? (3)

Ubi_UK (451829) | more than 13 years ago | (#133384)

"The fact is that no modern computer, no matter how powerful it gets, will ever be capable of creating true AI."

I'd be careful with reasoning like that. If Moore's law keeps going, we might very well have powerful enough CPUs in 100 years. That's not tomorrow, but it's certainly not never.
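
Back-of-envelope, assuming the classic 18-month doubling actually holds for another century (a big assumption):

    doublings = 100 / 1.5     # one doubling every 18 months
    print(2 ** doublings)     # ~1.2e20 -- roughly a hundred quintillion times today's capacity

Whether raw capacity is the missing ingredient is another question, but "never" is a strong word.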

Re:bs (1)

Genoaschild (452944) | more than 13 years ago | (#133385)

I agree. They are approaching it the wrong way. Giving a computer a huge database of practical knowledge and having it try to figure out which solution is the best one is not AI. The only way I see AI coming of age is if we give a computer basic instructions (like instinct and self-maintenance) for the processes of input and output. From there, start the computer off with absolutely no knowledge and let it interpret the world and make assumptions from its sensors, based on very simple logic from some sort of BIOS. Let it self-interpret this knowledge and let it apply the "interesting" aspects of it to future knowledge. This is how the human brain works, and it should be a logical step in getting computers to learn through experience, not from predetermined input.
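
Very roughly, something like this is what I have in mind (a sketch of my own, with a fake sensor and a crude "interesting = novel" rule standing in for the real thing):

    import random

    def sense():                          # placeholder for real sensors
        return random.choice(["light", "dark", "warm", "cold", "loud"])

    knowledge = {}                        # starts completely empty
    for step in range(1000):
        observation = sense()
        if observation not in knowledge:  # novelty is the only "instinct" here
            knowledge[observation] = {"first_seen": step, "count": 0}
        knowledge[observation]["count"] += 1

    print(knowledge)

Everything it ends up "knowing" came from its own sensors, not from a pre-loaded database.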
----

Re:Cyc? What's that got to do with AI? (1)

lys1123 (461567) | more than 13 years ago | (#133400)

I think a real problem with strong AI is defining it. I know PEOPLE who could not pass the Turing test, people who make me question the "Survival of the Fittest" ideas of Darwinism, yet I assume they have intelligence simply because they are people.

On the other side of the coin, a lot of people assume computers can't have anything resembling real intelligence because they are computers.

I think the question is basically moot. If we can get computers to a state where they can interact with us like a person and figure out what we want, so we don't have to read through a reference manual for two hours, then all of the research into AI was worthwhile.

Basically intelligence is an abstract concept and we will probably be arguing about whether a computer can truly be intelligent or not for many years to come.

echo Mhbqnrnes Stbjr | tr [a-y] [b-z]

Bogosity (1)

Cletus the yokel (462083) | more than 13 years ago | (#133403)

Does anybody remember that this is the guy that units of bogosity were named after? Check your jargon file [science.uva.nl]. Not at all to say that I think Cyc is bogus - I just wanted to be the first to bring it up.

Business Rules (1)

Cletus the yokel (462083) | more than 13 years ago | (#133404)

Anybody here familiar with the Business Rules approach to application development? C. J. Date wrote a pretty good primer on it called "What Not How".