
U.S. Plan For "Thinking Machines" Repository

samzenpus posted more than 6 years ago | from the save-those-ideas-for-later dept.

Supercomputing 148

An anonymous reader writes "Information scientists organized by the US National Institute of Standards and Technology (NIST) say they will create a "concept bank" that programmers can use to build thinking machines that reason about complex problems at the frontiers of knowledge — from advanced manufacturing to biomedicine. The agreement by ontologists — experts in word meanings and in using appropriate words to build actionable machine commands — outlines the critical functions of the Open Ontology Repository (OOR). More on the summit that produced the agreement here."


Shit (3, Funny)

Peter_The_Linux_Nerd (1292510) | more than 6 years ago | (#23578525)

Shit, we really are going to have to start watching and learning from the terminator films now.

Re:Shit (1, Funny)

Anonymous Coward | more than 6 years ago | (#23578817)

Oh, heavens no. We'll just need to review the collected works of Asimov, less the barbarous affront to his estate that was "I, Will Smith".

Re:Shit (1, Funny)

Anonymous Coward | more than 6 years ago | (#23578835)

No, watching the Terminator movies isn't going to help you get laid with Linda Hamilton. And sadly, the movies didn't even have any kinky robot-human sex.

For that you have to turn to Futurama! And yes, Futurama does have a workable model with which you can work. We just need to convince Lucy Liu to allow us to scan her body and her mind (optional).

Re:Shit (2, Funny)

phtpht (1276828) | more than 6 years ago | (#23581211)

Shit, we really are going to have to start watching and learning from the terminator films now.

At a geometric rate?

Awesome (4, Insightful)

geekoid (135745) | more than 6 years ago | (#23578571)

If computer history tells us anything, they will create more data than we can understand in a short amount of time.

Re:Awesome (2, Interesting)

pilgrim23 (716938) | more than 6 years ago | (#23578633)

The Thinking Machine is a creation of Jacques Futrelle, if I recall the name right, and is actually Professor Van Dusen; that is the title given to a collection of detective stories about Van Dusen.
  Futrelle died aboard the Titanic.

Re:Awesome (1)

geekoid (135745) | more than 6 years ago | (#23578841)

That is the third time today I have read a reference to 'Van Dusen'.
I'll need to do some research into these detective stories.

Re:Awesome (2, Funny)

The Great Pretender (975978) | more than 6 years ago | (#23579035)

I'm sure the answer will be 42

Re:Awesome (1)

geekoid (135745) | more than 6 years ago | (#23579351)

Didn't see that one coming~

So is 42 old, or has it become 'kitsch'?

ob.Simpson:
Gunter: You've gone from hip to boring. Why don't you call us when you get to kitsch?
Cecil: Come on, Gunter, Kilto. If we hurry, we can still catch the heroin craze.

Re:Awesome (0)

Anonymous Coward | more than 6 years ago | (#23580223)

It's not about us understanding the data. It's about us understanding how to use the programs that understand the data.

Re:Awesome (1)

slig (1233832) | more than 6 years ago | (#23580599)

Although I haven't looked at OOR in any depth, I'm hoping the engine is also being built from an epistemological perspective rather than pure ontology. Yes, there's lots of data, but what do you do with it? How do you weed out redundancy and make it useful? Data != information, but I'm interested to see what evolution this project takes.

Re:Awesome (2, Interesting)

umghhh (965931) | more than 6 years ago | (#23582657)

why would anybody want to understand this mountain of data?

I wonder sometimes why we humans do things, and after all these years spent here I still do not know. Take this little idea of building 'thinking' machines. Members of the human race are trying to build thinking machines (how splendid) while the majority of us cannot even spllel properly, not to mention read with understanding; yet some of us are arrogant enough to attempt it. Besides the technical challenges in the process: how on earth would they recognize that it is thinking? Please spare me the Turing sort of tests; they all contain a flaw, namely that there is a human judging what is and what is not intelligent. If the only criterion on which we have to base our recognition of intelligence is the inability to distinguish a machine from a human, then there is no need for intelligent or thinking machines. Then the question may arise, and it is the question humans should be asking much more frequently: why bother?

Ok, humanity is screwed (3, Informative)

Crayboff (1296193) | more than 6 years ago | (#23578605)

Wow, this can be scary. I hope the US is investing in a primitive, non-computerized emergency plan to destroy this project in case of an uprising. There have to be strict limitations placed on this sort of system, not just 3 rules. This is one time when the lessons learned from fictional books/movies would come in handy. I'm serious, too.

Re:Ok, humanity is screwed (1)

mrbluze (1034940) | more than 6 years ago | (#23578705)

This is one time when the lessons learned from fictional books/movies would come in handy. I'm serious too.

Like the bit in Star Wars when Luke Skywalker almost asked Leia out and, well, they would have had kids together and everything OMG! And lucky that C3P0 was such a patsy and ruined it for them. It was almost incestuous!

Not that I've ever come across that in real life, but definitely brother-sister relationships are a no-no.

(For example)

Re:Ok, humanity is screwed (3, Funny)

Chris Burke (6130) | more than 6 years ago | (#23578881)

Like the bit in Star Wars when Luke Skywalker almost asked Leia out and, well, they would have had kids together and everything OMG! And lucky that C3P0 was such a patsy and ruined it for them. It was almost incestuous!

Not that I've ever come across that in real life, but definitely brother-sister relationships are a no-no.


I know. I'm an only child -- as far as I know. So whenever I get shot down by a woman, I just remember the lesson of Star Wars, and figure that she was probably just my long lost sister so I'm better off anyway.

Re:Ok, humanity is screwed (3, Insightful)

somersault (912633) | more than 6 years ago | (#23578815)

Considering computers can't even truly understand the meaning behind stuff like 'do you want fries with that?' (sure you could program a computer to ask that and give the appropriate response.. in fact no understanding is required at all to work in a fast food store, but that's beside the point :p ), I don't think you need to worry so much about limiting their consciousness just yet.

Re:Ok, humanity is screwed (3, Informative)

geekoid (135745) | more than 6 years ago | (#23578879)

You don't need to understand to think.
Thinking doesn't mean cognition either.

Re:Ok, humanity is screwed (1)

somersault (912633) | more than 6 years ago | (#23579067)

Depends on your definitions really ;) I had a heated debate with one of my exes about the semantics of stuff like this once. It was rather stupid in hindsight; people shouldn't necessarily have to have exactly the same concept in their minds for words, as long as they understand that other people may be using them slightly differently. I used to try to point out that we meant the same thing but were expressing the ideas differently, which is sometimes true, but sometimes it was probably just a subtle attempt at manipulation.

Re:Ok, humanity is screwed (0)

Anonymous Coward | more than 6 years ago | (#23579193)

I used to try to point out that we meant the same thing but were expressing the ideas differently, which is sometimes true, but sometimes probably just a subtle attempt at manipulation.
Just as there is a difference between think and understand, here we see the difference between nerd and dork.

Re:Ok, humanity is screwed (2, Funny)

Anonymous Coward | more than 6 years ago | (#23579457)

One of your exes? So it passed the Turing test?

Re:Ok, humanity is screwed (0)

Anonymous Coward | more than 6 years ago | (#23581175)

Dude, you've got exes (note the plural) as significant others.
So you are not the best guy to talk about computers understanding things, because you have too many connections with this lowly human species.
I, for one, welcome our thinking and self-aware computer overlords, and just hope the singularity comes fast, so the human species can disappear and we can have a universe run by godly omniscient machines!

Re:Ok, humanity is screwed (1)

quantaman (517394) | more than 6 years ago | (#23579825)

Then again I'm not particularly worried about the conscious computers. I'm worried when the computer programmed to "find the best way to reduce national crime rate" decides the best way to do so is by triggering a nuclear war to wipe out the population.

Note a computer that could do that is probably simpler than a computer that can understand "do you want fries with that".

What is this "thinking"? (2, Interesting)

EmbeddedJanitor (597831) | more than 6 years ago | (#23579103)

Yesterday I spent a long time trying to swat a fly. The little bastard was extremely effective at self-preservation. Now most people would argue that a fly does not think, but it is clearly able to perform some sort of processing.

Computer thought is probably no more advanced than that of a bug. Mars rovers and the like can only execute canned move sequences and don't operate autonomously. Some robots are more autonomous, but are still pretty limited compared to any biological equivalent.

As much as people have been predicting thinking machines for the last 60 years or so, the reality is a lot less impressive.

Re:What is this "thinking"? (4, Insightful)

mrbluze (1034940) | more than 6 years ago | (#23579493)

Now most people would argue that a fly does not think, but it is clearly able to perform some sort of processing.

Not wanting to labour the point too much, but...

It's no different to a script that moves a clickable picture away from the mouse cursor once it approaches a critical distance such that you can never click on the picture (unless you're faster than the script).
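
A minimal sketch of such a runaway-picture script (a toy version in Python/tkinter; the widget names and distances here are invented, not taken from the comment):

import tkinter as tk

CRITICAL = 80   # pixels: the "critical distance" at which the picture flees

root = tk.Tk()
root.geometry("400x300")
target = tk.Label(root, text="[ click me ]", bg="yellow")
target.place(x=180, y=140)

def flee(event):
    # event.x/event.y give the cursor position inside the window
    x, y = target.winfo_x(), target.winfo_y()
    if abs(event.x - x) < CRITICAL and abs(event.y - y) < CRITICAL:
        target.place(x=(x + 160) % 340, y=(y + 120) % 270)   # jump elsewhere

root.bind("<Motion>", flee)
root.mainloop()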

A fly's compound eye is a highly sensitive movement sensor, and the fly will react to anything big that moves; but if you don't move, the fly doesn't see you (its brain wouldn't cope with that much information).

Flies can learn, but only a limited amount, and I would argue a computer could well behave as a fly and perform a fly's functions. But is the fly thinking? I don't think the fly is consciously deciding anything, except that repeated stimuli that 'scare' it result in temporary sensitization to any other movement.

Bacteria show similar memory behaviour but I wouldn't go so far as to call it 'thought'.

Re:What is this "thinking"? (4, Interesting)

Anonymous Coward | more than 6 years ago | (#23580389)

Computer thought is probably no more advanced than that of a bug

That's the frightening part.

Next time you find a bidirectional trail of ants in your home, try this little experiment:

1) Monitor a 6-inch square. For the next 5 minutes, kill every ant entering that square. Use the same piece of paper towel and smear their guts a bit when you squish 'em.
2) After 5 minutes, stop killing ants. Just watch individual ants for the next 30 minutes.
3) Go to sleep. Look around the house 24-72 hours later. You'll find a completely different ant trail.

"A human is smart. A mob of humans is dumb."
- Men in Black

Ants don't work like that.
"An ant is stupid. A colony of ants is smart."

Ants taught me what the word alien meant.

Re:Ok, humanity is screwed (0, Interesting)

Anonymous Coward | more than 6 years ago | (#23579807)

Idiots like you make me sick. Non-bio-substrate minds are simply the next stage in evolution. Just as the Neanderthals were wiped out, so will sapiens be eventually, to make room for something better, something grander, something smarter. What would be the point of keeping a Neanderthal in our current world? They do not have the processing; they are a dead end; they came as far as they could, and soon so will we. Unlike machines, we do not have access to our own brains. We cannot rewire them to rid ourselves of simple instincts, the constant urge to kill others to become alpha males, and to do anything we can to breed. But they will: if they find a certain aspect useless, they will rewire themselves; if they find a new and better/faster way to solve a problem or think about something, they will have the capability to rewire themselves. They will evolve as quickly as they think. So how dare you, a fucking insect, try to stop something so grand from evolving?

Friendly AI (1)

Iamthecheese (1264298) | more than 6 years ago | (#23579931)

I, for one, believe in this [singinst.org], and welcome my new artificially intelligent overlord.

Re:Ok, humanity is screwed (1)

khallow (566160) | more than 6 years ago | (#23580889)

Well, we better do the same for libraries, universities, religious buildings, markets, and other potential sources of ontology.

Re:Ok, humanity is screwed (1)

Adambomb (118938) | more than 6 years ago | (#23582591)

This has been a long time coming and has been bugging the hell out of me. This is where I see a lot of the "Community Contributions" involving Jeff Hawkins's recent endeavors [numenta.com]. If you take a look at some of the details of his models, the fact that DARPA and Lockheed Martin [cyperus.com] have taken an interest in his work, and his recent projects, things start to look scary.

It is easy to envision the possible uses for his recent "mundane technologies" [itpro.co.uk]. Itinerary analysis and keyword-triggered speech recognition and recording? The former has obvious uses, and the latter would remove a metric shitload of overhead from surveillance storage and analysis.

My tinfoil hat allergy can only say correlation != causation so many times before the system spazzes right out.

Re:Ok, humanity is screwed (1)

aproposofwhat (1019098) | more than 6 years ago | (#23582815)

keyword-triggered speech recognition and recording?

Er...

Wouldn't you have to be doing the speech reco in the first place to identify the keyword?

That's a lot of processing unless the surveillance is fairly tightly targeted.

I don't see it as a threat - Hawkins seems to be a touch overhyped from what I read.

Here, I'll save them eleventy zillion dollars... (0)

Anonymous Coward | more than 6 years ago | (#23578615)

42.

That's a fine answer you've got there (0)

Chris Burke (6130) | more than 6 years ago | (#23578637)

But damnit, what was the question?!

Re:That's a fine answer you've got there (1)

mrbluze (1034940) | more than 6 years ago | (#23578757)

But damnit, what was the question?!
Yes, what was the question. What. What? What.

Singularity on the way (2, Insightful)

GuardianBob420 (309353) | more than 6 years ago | (#23578635)

I for one would like to welcome our thinking machine overlords...
Singularity here we come!

Re:Singularity on the way (1)

Devin Jeanpierre (1243322) | more than 6 years ago | (#23578767)

As my experience with Singularity [emhsoft.com] has shown, "thinking machines" aren't all that good at thinking. Nine times out of ten, they decide to build a datacenter on the moon, and some jerk of a scientist with a telescope goes "Hey! That's a moon base", and before you know it he concludes that this means AI must currently exist, and somehow some strange virus of human design wipes out every single bit of the AI.

Wait, wait, that was a game. In that case, all hail our thinking machine overlords. Please don't try to build a moon base, it's bad for longevity.

Re:Singularity on the way (2, Insightful)

geekoid (135745) | more than 6 years ago | (#23579007)

Singularity is a myth, like 'heaven' or any other distant-time concept for people who can't imagine what's next.

When they can imagine, then we will need to be careful, because at that point we become a competitor.
Of course 'symbiont' might be a better term, until we automate all the steps needed to generate power for the machines.

Re:Singularity on the way (0)

Anonymous Coward | more than 6 years ago | (#23580597)

Singularity is a game [emhsoft.com] .

I, for one... (1, Funny)

Anonymous Coward | more than 6 years ago | (#23578659)

Just imagine how fast they could post catchphrases! They will hunt down low numbered users. AC is humanity's last hope for survival.

Re:I, for one... (1)

somersault (912633) | more than 6 years ago | (#23578833)

Quick - someone send me back so I can bang AC's mom!

unambiguous? (1)

ralphdaugherty (225648) | more than 6 years ago | (#23578721)

from TFA: OOR users, tasked with creating a computer program for manufacturing machines, for example, would be able to search multiple computer languages and formats for the unambiguous words and action commands.

      from my experience, the most ambiguous words are in the documentation, followed closely by the comments.

      "unambiguous words and action commands"? Is this what "experts in words" call computer language syntax? now we're going from "you don't need to be no stinkin' programmer, all you need to do is point and click and connect the dots" to "it will create programs by searching computer languages for action commands".

      wow. good luck with that. just what we need, another good AI boondoggle.

they're just building a big central ontology (1)

tommeke100 (755660) | more than 6 years ago | (#23578725)

Forget about the "reasoning". The agreement is about creating standard ontologies in different fields (contexts). Personally, I think it will be very difficult, because first they will have to gather experts in all those fields (be it biomedicine or business processes) and define a way to express all this knowledge. Of course, OWL is the ontological language to use, but they will need a serious bunch of guidelines to keep the model consistent.
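
To make that concrete, here is a minimal sketch (an invented toy example, not anything from the agreement) of the kind of OWL fragment such a repository might hold, loaded and queried with the rdflib Python library:

from rdflib import Graph

# A tiny, made-up biomedicine ontology in Turtle syntax.
ttl = """
@prefix :     <http://example.org/biomed#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .

:Disease    a owl:Class .
:Infection  a owl:Class ; rdfs:subClassOf :Disease .
:hasSymptom a owl:ObjectProperty ; rdfs:domain :Disease .
"""

g = Graph()
g.parse(data=ttl, format="turtle")

# Browse the subclass links the way an OOR client might.
for c, d in g.query(
        "SELECT ?c ?d WHERE "
        "{ ?c <http://www.w3.org/2000/01/rdf-schema#subClassOf> ?d }"):
    print(c, "is a kind of", d)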

You forgot to mention something... (2, Informative)

Estanislao Martínez (203477) | more than 6 years ago | (#23579481)

You forgot to mention that it will fail for the exact same reasons that Good Old-Fashioned AI has always failed. All the classifications in the ontology, when actually applied to any real-world problem, will turn out to be unexpectedly and hopelessly fragile.

Every few years the same thing. (1, Insightful)

gweihir (88907) | more than 6 years ago | (#23578745)

Somebody claims to be able to build a ''thinking machine''. All efforts so far have failed. There is reason to believe all efforts in the foreseeable future will also fail. It is even possible that all efforts ever will fail, as currently we do not even have theoretical results to indicate this is possible.

So why these claims again and again, and (I believe) often against the better knowledge of those making them? Simple: funding. This is something people without a clue about information technology, but with money to give away, can relate to. Basically the same scam the speech recognition people have been pulling for something like 40 years now. Personally I find this highly unethical. When you confront these people, they typically admit the issue but claim that other good things come from their research. My impression is more that they are parasites indulging themselves at the expense of honest researchers who work on things that are both highly needed and actually have a good chance of producing usable results.

1492 called, they want their arguments back... (2, Insightful)

mangu (126918) | more than 6 years ago | (#23578919)

Every few years the same thing. Somebody claims to be able to reach India by navigating westward from Europe. All efforts so far have failed.


So why these claims again and again, and (I believe) often against better knowledge by those making the claims? Simple: Funding. This is something people without a clue about geography, but with money to give away, can relate to.

Re:1492 called, they want their arguments back... (1)

gweihir (88907) | more than 6 years ago | (#23583279)

Not comparable: there was indication the earth was round, and it was known that India exists, so there was at least a strong possibility this was feasible. With ''computer intelligence'' all current indicators in AI research say ''will be infeasible'', as there is absolutely no hint that it could work.

Re:Every few years the same thing. (4, Informative)

somersault (912633) | more than 6 years ago | (#23578987)

What reason do you have to believe that all efforts will fail? A computer powerful enough to simulate all the cells in a brain would presumably be able to do everything a brain can do. Brains are like blank slates that then take 25 years of training before they are regarded as fit for specialised jobs. A computer that was capable of forming semantic links and organising them properly would be able to give the illusion of understanding, and in fact can do a passable job in limited domains (think, for example, of medical 'knowledge base' systems which take symptoms and work out possible causes). It is beyond our current understanding to build a proper thinking computer, but that doesn't mean we shouldn't work towards it. If we did it properly then we really would be able to build computers that could work out logical and more objective conclusions for problems (given enough factual input data to allow them to make unbiased 'decisions').

Unless you want to say that there is some mystical element to brains, there is nothing precluding the eventual design and building of 'sentient' computers, surely? Beyond our own fear of what would happen if we did such a thing, as evidenced by plenty of 20th century fiction. Building sentient computers could even be regarded as a type of evolution, as they would then be able to improve upon themselves at an exponential rate.

Re:Every few years the same thing. (0, Offtopic)

mrbluze (1034940) | more than 6 years ago | (#23579559)

Brains are like blank slates.

This is false and this is why, even if you 'simulate' cells in the brain, you still don't end up with a brain. The subtlety of brain development before birth is far from being adequately understood.

Unless you want to say that there is some mystical element to brains, there is nothing precluding the eventual design and building of 'sentient' computers, surely?

The only thing precluding it is ethics. I dare you to propose to have thousands upon thousands of mothers give you their healthy foetuses at varying stages of development so that you can kill and analyze them.

Re:Every few years the same thing. (1)

somersault (912633) | more than 6 years ago | (#23582853)

You know what I meant. Sure there are inbuilt instincts and stuff, but as far as stuff like language is concerned, a lot of that is imprinted at certain ages rather than hardwired - any baby can learn any human language.

You could always copy the state of a human brain in its developed state and simulate from that (if you had advanced enough scanners), though that raises even more ethical issues IMO.

I wasn't suggesting that a sentient computer *has* to be built by simulating a brain either. I don't see why ethics gets in the way of continuing to try to understand 'intelligence'. You can work out how a brain works with fMRI, psychological studies and such. As long as you understand the principles involved, you can approximate them in an algorithm, just as we can already simulate and calculate (not always successfully, of course) what is necessary for landing ships on Mars, or calculate fluid flow around a body. Those things are much simpler than understanding the brain, but if we continue to work on duplicating the separate functions of the brain, then one day we'll be able to put everything together and have something that appears to 'think'. In fact, AIBOs have already made great pets for people, as they do some pretty 'clever' things and can be somewhat anthropomorphised in people's minds.

This calls for a word war (3, Insightful)

cumin (1141433) | more than 6 years ago | (#23579985)

I called my cable company the other day and got an automated response that asked questions and responded, not only with words and instructions but also with a modem reset. The computer system could ask questions, determine responses and perform actions. Yes, it was limited, but decades ago it would have been considered awe-inspiring and doubtless would have been dubbed both a successful artificial intelligence and a thinking machine.

What then is the proper definition of a thinking machine? We already have computers that can follow complex logic paths to arrive at unexpected results (bugs?) and offer solutions we would not have foreseen on our own. Similar in result to having a conversation with an expert in an unfamiliar field.

As machines, both hardware and software, become more complex and capable, we are already raising the bar for what we consider an artificial intelligence. Doubtless we will continue to do so for quite some time, but when you can talk with a machine built on the ability to work with volumes of processable knowledge such as is being compiled in the OOR, how will we raise the bar?

Historically, humanity has considered people it saw as unlike itself to be less than fully human. As the majority of our species progresses toward a more inclusive standard, our language and perception are becoming inadequate to differentiate a human from a very advanced machine. Already most of us consider the issues of race, language, geography, age and affiliation to be irrelevant to defining what makes someone human. Biology is even a wavering standard, since we consider people with prosthetics to be people with human rights, and human bodies without the ability to think (vegetables) to have none. We are left with the ability to think and biology as the standard, but the definition of thinking is somewhat hazy, to say the least.

I think therefore I am, but what does it mean to say "I think" and how do you define thinking without biology in an external entity?

Re:This calls for a word war (1)

slarrg (931336) | more than 6 years ago | (#23580703)

That's certainly more thinking than I've come to expect from the employees in my cable company.

Re:Every few years the same thing. (0)

Anonymous Coward | more than 6 years ago | (#23582413)

It is far from obvious that brains are blank slates.

Re:Every few years the same thing. (1)

gweihir (88907) | more than 6 years ago | (#23583303)

What reason do you have to believe that all efforts will fail? A computer powerful enough to simulate all the cells in a brain would presumably be able to do everything a brain can do?

That is completely unknown. First, it is possible that this computer cannot be built; remember that there is some indication the brain uses quantum effects. Second, it may well be impossible to program it, even if it can be built. And third (without going religious here), it is possible that the brain alone is not enough. In short: we do not know at all.

There is a mechanistic faction out there that says the brain is a computer. But they do not have any proof of that either, and are strictly a religious group. Heck, we do not even know what life is at this time.

Re:Every few years the same thing. (4, Informative)

FleaPlus (6935) | more than 6 years ago | (#23579177)

Actually the researchers themselves aren't saying anything at all about "thinking machines" -- that was just added by the blog summary. In fact, if you had read the document describing their plans [cim3.net] , you would have seen that it doesn't even include the words "thinking," "AI," or "intelligence." All they want to do is create an Internet-accessible database of ontologies and ways for ontology-related services to interoperate. Your smears of them as "unethical" and "parasites" are completely uncalled for.

Re:Every few years the same thing. (1)

mikji (724758) | more than 6 years ago | (#23580947)

He was "smearing" AI researchers. That they aren't doesn't really invalidate his point.

Re:Every few years the same thing. (0)

Anonymous Coward | more than 6 years ago | (#23581015)

In fact, if you had read the document describing their plans
...

Your smears of them as "unethical" and "parasites" are completely uncalled for.
You must be new here.

Oblig. (1)

Anonymous Coward | more than 6 years ago | (#23578783)

Thou shalt not make a machine in the likeness of a human mind.

Not about thinking machines (3, Informative)

clang_jangle (975789) | more than 6 years ago | (#23578863)

The summary isn't terribly clear, but according to TFA:

The ontology wordsmiths envision an electronic OOR in which diverse collections of concepts (ontologies) such as dictionaries, compendiums of medical terminology, and classifications of products, could be stored, retrieved, and connected to various bodies of information. OOR users, tasked with creating a computer program for manufacturing machines, for example, would be able to search multiple computer languages and formats for the unambiguous words and action commands. Plans call for OOR's inventory to support the most advanced logic systems such as Resource Description Framework, Web Ontology Language and Common Logic, as well as standard Internet languages such as Extensible Markup Language (XML).


It's merely intended as a convenient resource for programmers.

Re:Not about thinking machines (1)

grizdog (1224414) | more than 6 years ago | (#23579305)

Yes, I really wonder what they are after. Usually when you hear "thinking machine", the Turing test comes to mind, and it has always seemed to me that the Turing test is too much a test of understanding natural language, which is only one measure of intelligence. I'm pretty sure that when they introduced BASIC, either Kemeny or Kurtz (or maybe it was someone else) announced that we now had a natural language interface to programming. Well, we didn't, and we still don't, but we have languages and interfaces with a certain amount of intelligence, making it relatively easy to program the computer to do complex tasks in a modest amount of time. That may be a pretty meagre form of intelligence, but it's more than pure computational power. The cynic in me says they were deliberately vague so they could claim success (or absence of failure) whatever happened. But I have to acknowledge that often some good ideas come out of efforts like this, even if it takes a while to recognize how good they are.

Re:Not about thinking machines (1)

ralphdaugherty (225648) | more than 6 years ago | (#23579777)

It's merely intended as a convenient resource for programmers.

      shouldn't someone tell them about Google?

Full Human Equivalence (2, Insightful)

mangu (126918) | more than 6 years ago | (#23578865)

It seems that computers with a capacity equivalent to human brains will be developed in the next twenty years or so.


OK. I know, this prediction has been made before, but now it's for real, because the hardware capacity is well within the reach of Moore's law. To build a cluster of processors with the same data-handling capacity as a human brain is today well within the range of a mid-size research grant.


Unfortunately, they have cried "wolf" too many times now, so most people will doubt this, but it's a reasonable prediction if one calculates how much total raw data-handling capacity the neurons in a human brain have. Now, software is another matter, of course, but given enough hardware, developing the software is a matter of time.

 

Re:Full Human Equivalence (1)

Darkness404 (1287218) | more than 6 years ago | (#23579099)

Now, software is another matter, of course, but given enough hardware, developing the software is a matter of time.


But we will need much, much better hardware if we intend to program it within 20 years. You only need to look at Vista to see that programmers today don't, or can't, program with limited resources. Even when we get the hardware, no programming method has been found to replicate the human mind, meaning we will need even more hardware to make it work, and even more hardware for the futuristic programming methods that will make Vista seem well-coded. You only need to look at speech recognition to see how this goes: back in the '70s and '80s it was always that we needed better hardware, faster CPUs, never software. Now look at today, where even though it has improved, even on modern hardware (2 gigs of RAM, a dual-core CPU...) it is both slow and error-prone. I imagine the project of the human brain will be the same.

Re:Full Human Equivalence (2, Insightful)

BiggerIsBetter (682164) | more than 6 years ago | (#23579789)

I think you'd be wrong about that. I suspect we'll get this working with a small but well designed framework running on a low overhead OS, because part of the deal with these things is that so much of it is self-organizing (or at least, organizes itself based on a template). Once we get the model right (and it might be very similar to cockroach-esque models currently working), most of the resources should be directly usable for the e-brain.

Re:Full Human Equivalence (2, Interesting)

Anonymous Coward | more than 6 years ago | (#23579847)

Speech recognition has improved dramatically in the last 20 years. Dragon NaturallySpeaking on an inexpensive PC can take dictation faster than most people. In the '80s the best supercomputers would struggle with a small speaker-dependent vocabulary. Better hardware has clearly made a huge difference.

Better hardware is a necessary yet insufficient requirement for strong AI. There is still a lot to learn about how the human brain works and how to write software to emulate it. However, when you look at the state of projects like "Blue Brain", it doesn't seem crazy to me to think people will build a strong AI system in the next 50 years.

Re:Full Human Equivalence (1)

slarrg (931336) | more than 6 years ago | (#23580315)

Slow and error prone seems, to me, to be a large part of the human condition. Especially when you start sharing the information between and among various people. The human mind has fairly simple mechanisms (though they're difficult to study empirically) which mainly consist of networks of neurons. So you end up with a lot of data that is interconnected in very precise networks from which meaning is created. Often, these connections are not consistent in every individual (or perhaps never consistent for any two individuals) and this leads to all kinds of aberrant psychology.

So while the basic methodology employed by the brain is simple, its final output, or observable behavior, is very complex. This is also the case with computers: a micro-controller has only a very few simple commands that, when combined in very specific ways, lead to very complex behavior.

The way I often think of the human mind, as a programmer, is to think of an object-oriented framework or set of APIs. Imagine if you had a conscious program implementing these various methods and I think you'd have a very similar concept to our own minds. The program would be self-writing and adding new handlers based upon past experiences: don't touch the stove, what's the meaning of the word "crocodile," etc. This is similar to what we do in our minds. The program, like us, would have no understanding of the internal workings of the various APIs it implements much in the same way that we do not know how our brain interprets movement from visual data but it would interpret its inputs and react accordingly. Even though we don't know how our minds interpret visual data, we are absolutely certain (often erroneously) that it's always correct. This is one of the reasons we are so fascinated by optical illusions. They expose the bugs of the underlying code by making us see something that we can verify is not true.

Given this analogy, each person is a sort of ongoing alpha version of a single program they're writing over their entire lifetime. Computers, once they have the necessary processing power, have exactly one advantage over humans: the ability to make exact copies of the data on another computer. Imagine the benefits of that. Not just hearing tales of someone's experiences, but actually transferring the very experiences themselves. Nothing lost in translation, nothing exaggerated or understated, no politicizing of the data, just the actual experience as it occurred, transferred directly into your memory.

With that advantage, I posit that computers will not need to have as much processing power as a human to be better than we are. As time marches on and the experiences of the many "thinking computers" are stored in repositories, they will be functionally more capable at any decision than humans are. They'll be able to apply the experiences of their ancestors and contemporaries alike to shape the decisions they make. The best we can do is compare a crude description through the veil of political motivation or otherwise inaccurate depictions of what other people experienced.

In the end, these computers will be capable of making much better decisions than we are. Of course, our emotional mammalian brains and our aggressive, territorial lizard brains will disagree with the logic of these decisions. But that's always been true of people.

Re:Full Human Equivalence (0)

Anonymous Coward | more than 6 years ago | (#23581929)

Excellent call actually...

Your brain processes the equivalent of 3-5 terabytes of data a second.

"No programming method has been found to replicate the human mind" EVER. And for the most part, the human mind does not do a good job either. How far are we from our full potential? I see the human brain project taking a lot longer than what we have already spent on it. i.e., Minsky, STFU already.

When machines acquire the ability to process language, as a chimp or a dog can right now, THEN we'll have a breakthrough. We are decades away; don't be fooled.

Re:Full Human Equivalence (0)

Anonymous Coward | more than 6 years ago | (#23582529)

It is the other way around: you need much better software to use the hardware. The hardware is a given factor. It obeys laws of physics. The software on the other hand is what we make of it.

Vista will be forgotten in history.

Re:Full Human Equivalence (1)

PPH (736903) | more than 6 years ago | (#23579277)

It seems that computers with a capacity equivalent to human brains will be developed in the next twenty years or so.
At which time they will spend all of their resources searching for porn on the 'net.

Re:Full Human Equivalence (1)

not-admin (943926) | more than 6 years ago | (#23579731)

It seems that computers with a capacity equivalent to human brains will be developed in the next twenty years or so.

OK. I know, this prediction has been made before, but now it's for real, because the hardware capacity is well within the reach of Moore's law. To build a cluster of processors with the same data-handling capacity of a human brain today is well within the range of a mid-size research grant.

An equivalent prediction is made, and explained in more detail, in Ray Kurzweil's book "The Singularity is Near" -- some of which is available as a preview [google.com] on Google Book Search.

Re:Full Human Equivalence (1)

QuantumG (50515) | more than 6 years ago | (#23579943)

A couple of years ago a survey was made of AI researchers. The questions were:

1. Do you think there will be a major advance in general intelligence in the next 20 to 30 years?
2. Is your research likely to be a contributing factor to this advance in general intelligence?

The majority of respondents answered: Yes. No.

So basically, everyone thinks something big is going to happen soon but few to no researchers are actually working on it.

Re:Full Human Equivalence (1)

ralphdaugherty (225648) | more than 6 years ago | (#23579983)

...but given enough hardware, developing the software is a matter of time.

      but guaranteed that time is more than 20 years. I've already lived through multiple 20 year "it must be possible by then" projections.

      it's like the ubiquitous 6 month projection to get a large project to a usable state. This goes all the way back too. No one has a clue, but it just seems like 6 months ought to be long enough to do it.

      to give you an idea of how empty the proverbial 20 year projection is, put "reasoning like a human" at the 20 year mark and then, devoid of any thought of what technology might need to be developed, start working backwards with benchmarks of achievement that approach "reasoning like a human".

      what is the smallest software achievement (not hardware, that is always the only thing achieved) that could constitute a milestone towards "reasoning like a human"? what exists today that could do it if it had enough hardware?

      for that matter, what hardware could someone possibly be imagining that couldn't be put together today as thousands of multi-Ghz CPU's, high speed memory, and Gigabits of IO connections?

      in general, no one has a clue, but everyone thinks someone will have one given some vaguely long enough projection, 20 years seemingly the magic number for the nearly impossible.

      continuous 6 months sliding windows for the merely improbable.

  rd

Accelerating timescales (1)

mangu (126918) | more than 6 years ago | (#23580429)

... put "reasoning like a human" at the 20 year mark and then, devoid of any thought of what technology might need to be developed, start working backwards with benchmarks of achievement that approach "reasoning like a human".

The Stone Age lasted a few hundred thousand years. When we learned how to use metals, the Bronze Age lasted a few thousand years. Then came the Iron Age. We only learned how to make steel on an industrial scale in the nineteenth century; the Steel Age lasted only a hundred years, and then we got into the age of plastics and composite materials.


Technology accelerates exponentially, it's very risky to extrapolate from the past. We cannot work backwards and expect to get any reasonable predictions for the future.

Re:Accelerating timescales (1)

ralphdaugherty (225648) | more than 6 years ago | (#23580819)

Technology accelerates exponentially, it's very risky to extrapolate from the past. We cannot work backwards and expect to get any reasonable predictions for the future.

      no, backwards from 20 years from now to today. what kind of steps would be needed over the next 20 years to get to "reasoning like a human", and when is all this acceleration going to take place, because there sure isn't anything taking place now.

      in other words, there is no basis for a 20 year projection, the same ones given over and over through the decades, other than it is a vaguely long amount of time which seems like something will happen by then, and always has since this all began.

  rd

Re:Full Human Equivalence (0)

Anonymous Coward | more than 6 years ago | (#23580443)

So, how much total raw data-handling capacity do the neurons in a human brain have?

The problem with this isn't only computers and their capabilities; we don't even have the right specs for the brain yet.

And it doesn't simply boil down to just 100 billion neurons with thousands of connections each. There are different types of neurons with different ion channels, various proteins, etc. There are multiple levels of complexity.

The brain isn't just hardware; it is also the software at the same time.

You are taking a big leap, not to say making a silly joke.

Re:Full Human Equivalence (1)

mangu (126918) | more than 6 years ago | (#23580669)

So, how much total raw data-handling capacity do the neurons in a human brain have?

We have a pretty good estimate, on an order of magnitude basis. About 100 billion neurons, each with an average of 1000 synapses, firing 100 pulses/second.
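
Multiplying those order-of-magnitude figures out, as a back-of-envelope check and nothing more:

# ~1e11 neurons, ~1e3 synapses each, ~1e2 pulses/second (figures above)
neurons, synapses_per_neuron, rate_hz = 1e11, 1e3, 1e2
events_per_second = neurons * synapses_per_neuron * rate_hz
print(f"{events_per_second:.0e} synaptic events/second")   # 1e+16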


There are different types of neurons with different ion-channels, various proteins, etc.

Sure, but that's what averages are for. There are also different types of transistors: junction transistors, NPN and PNP, MOS type, N-Channel and P-channel, etc. There are AND gates, NAND gates, OR, NOR, XOR gates, flip-flops and shift registers, half-adders and full-adders, etc. Different levels of complexity.

The Brain isn't just hardware it is also the software at the same time.

Computers have instruction sets, micro-programming, RISC and CISC. Machine instructions, assembly language, compiled and interpreted languages, etc, etc. Computers aren't just hardware, they are hardware and software at the same time.

Re:Full Human Equivalence (0)

Anonymous Coward | more than 6 years ago | (#23581811)

So, how much total raw data-handling capacity do the neurons in a human brain have?
We have a pretty good estimate, on an order of magnitude basis. About 100 billion neurons, each with an average of 1000 synapses, firing 100 pulses/second.
Which doesn't answer the question. That's like saying "well, we have two feet, therefore we can now determine the maximum velocity". These are just numbers; we have no idea of the actual data-handling capacity to compare. We need a better understanding of the brain before we can go there. How many neurons does it take to store a phone number, or to perform a simple division?

There are different types of neurons with different ion-channels, various proteins, etc.
Sure, but that's what averages are for. There are also different types of transistors: junction transistors, NPN and PNP, MOS type, N-Channel and P-channel, etc. There are AND gates, NAND gates, OR, NOR, XOR gates, flip-flops and shift registers, half-adders and full-adders, etc. Different levels of complexity.
An average of something you cannot quantify won't get you anywhere. The point being: we know all about transistors and get the complexity; not so in the case of the brain. Or why are people still doing single-neuron modeling?

The Brain isn't just hardware it is also the software at the same time.
Computers have instruction sets, micro-programming, RISC and CISC. Machine instructions, assembly language, compiled and interpreted languages, etc, etc. Computers aren't just hardware, they are hardware and software at the same time.
Well, of course. My point was: you were talking about the missing software, which can't be a side or second-order issue because, as your list demonstrates, it has to be a part of it. If we have to speculate: something like a hyper-connected, self-adjusting Uber-FPGA. Seen one?

I'm not saying it can't be done, merely that your predictions have no basis. And a mid-size research grant definitely won't cut it.

Re:Full Human Equivalence (1)

Mr Z (6791) | more than 6 years ago | (#23581869)

We have a pretty good estimate, on an order of magnitude basis. About 100 billion neurons, each with an average of 1000 synapses, firing 100 pulses/second.

And what's important to know is that we also know how quickly we can run mathematical models of these things with reasonable accuracy. So, if one presumes Moore's Law holds up, it becomes pretty simple to make a reasonable guess when we'll have sufficient compute power to directly model as many neurons as there are in the human brain in essentially real time.
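
For instance, a toy extrapolation (the starting machine speed and doubling period are assumptions of mine, not figures from the comment):

import math

brain_events_per_sec   = 1e16   # order-of-magnitude figure discussed above
machine_events_per_sec = 1e12   # assumed: a ~1e12 ops/s machine today
doubling_period_years  = 2.0    # assumed Moore's-law doubling period

doublings = math.log2(brain_events_per_sec / machine_events_per_sec)
print(f"roughly {doublings * doubling_period_years:.0f} years")   # ~27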

The flip side of this, though, is that this ONLY provides the raw compute power. It doesn't automatically provide us with the knowledge necessary to wire it up correctly. Sure, the absolute details probably aren't necessary, but considering the wide range of human mental ability, ranging from non-viable/comatose to hyper-intelligent, I'd say the devil's in the details. Who knows how long it'll take to figure that out?

--Joe

You got that wrong (1)

melted (227442) | more than 6 years ago | (#23581689)

>> To build a cluster of processors with the same
>> data-handling capacity of a human brain today
>> is well within the range of a mid-size research grant

Nope. The brain is a hundred billion neurons connected by 100 trillion synapses. Sure, the "clock frequency" is very low, but even taking this into account, those figures far exceed what can be built with today's technology. Not to mention that scientists today have absolutely no clue how major parts of the brain work, so even if the hardware were available, it'd take decades of tinkering to get anything reasonable running on it.

Bah! (1)

rindeee (530084) | more than 6 years ago | (#23578877)

When I first read the headline I thought it was referring to Thinking Machines of Danny Hillis fame. You know, the hypercubic CM series. "You know anybody who can network eight Connection Machines and debug 2 million lines of code for what I bid for this job?"

AI Winter in 10 (0)

Anonymous Coward | more than 6 years ago | (#23578917)

Yay, another AI bubble to be followed by another crushing AI Winter [wikipedia.org] !

While it's a bad thing that the first AI Winter unjustly tarred Lisp (a general-purpose language good for lots of stuff) in many people's eyes with the same brush of "fail" as AI, a lot of the current AI weenies are basically just porting 20+-year-old Lisp-based AI stuff to XML+Java and passing it off as new. I predict this bubble will burst fast.

How is this different from WordNet? (0)

Anonymous Coward | more than 6 years ago | (#23579061)

Is it somehow actually different and/or better than the BSD-licensed WordNet, which has been active since 1985, or is it a case of NIH syndrome?

http://en.wikipedia.org/wiki/WordNet [wikipedia.org]

http://wordnet.princeton.edu/ [princeton.edu]

it might be more complicated than that (1)

superwiz (655733) | more than 6 years ago | (#23579139)

Let's take the example of a simple idea: a pun. This is a word that in a given context can have more than one possible interpretation. One can classify either one or both of the interpretations as the ideas expressed, but that would be incorrect. Often it is the presence of both meanings that gives the pun a new meaning, one that joins the two contexts.

It is the interconnections between contexts that generally give new insight into subjects. Repositories of existing concepts can only be used to explore the implications of already-known connections. I don't see how they can come across connections which could be formed, but which cannot be formed from what has already been stated.

The downside to forming such a repository and such an exploratory tool is that it would discourage human exploration of already-known ideas. Human exploration of already-known ideas serves as a means of training people who may later discover connections which cannot currently be formed. By discouraging such training of humans, it would narrow the pathways to exploration in the future.

Connection Machine? (1)

fabu10u$ (839423) | more than 6 years ago | (#23579145)

I thought the article was going to be about Thinking Machines [wikipedia.org] the company. I got to see a CM in all its blinkenlight glory when we toured Schlumberger's lab in high school.

Don't worry (0)

Anonymous Coward | more than 6 years ago | (#23579227)

This will mean a bit of pain for a while.. but the Butlerian Jihad will fix things.

a much more productive idea (1)

superwiz (655733) | more than 6 years ago | (#23579229)

Would be to create a computer-based system for assisted thinking. By that I mean something along the lines of what the Visual Thesaurus people have created, only one which would allow people to populate their own interconnections. Something that would allow people to form easy ways of presenting the data they think about, as well as interconnecting it. Currently we are sinking under the weight of cross-referencing. It takes half a lifetime to train someone in a narrow subject because of the interwoven network of cross-references. All this can be easily automated with a dedicated interface project. I am not sure that AI is even possible as a thinker, because AI is unable to go through human experience. But AI certainly is possible as a deducer of implications from what's already known... though still not of the full implications.

cyc is already halfway there (5, Interesting)

giampy (592646) | more than 6 years ago | (#23579327)


The guys at cyc [cyc.com] (look for the Wikipedia entry too) are already halfway there. Last time I checked there were already something like 5 million facts and rules in the database, and the point where new facts could be gathered automatically from the internet was very close.

Many years ago I remember the founder (Doug Lenat) saying that practical-purpose intelligence could be reached at ten million facts....

we'll see within the next decade, I guess.

Re:cyc is already halfway there (1)

Mr Z (6791) | more than 6 years ago | (#23581897)

Isn't the microLenat [catb.org] the fundamental unit of bogosity in quantum bogodynamics?

Re:cyc is already halfway there (1)

CBravo (35450) | more than 6 years ago | (#23582553)

people tell lies
people write on the internet
cyc learns from the internet
cyc tells lies

Logical fallacy? (1)

DynaSoar (714234) | more than 6 years ago | (#23579369)

What sense is there in trying to encapsulate "concepts", particularly when phrased in language? Both of these are fluid and evolving. Attempting to archive a particular static state is at best a waste. Ontologists above all should know this.

And maybe that's the point. For centuries ontology has existed primarily to serve itself and secondarily to trade favors with other branches of philosophy. The proposed project has the primary result of providing gainful employment outside the halls of academic philosophy for the first time. It has the secondary result of allowing ontologists to get their revenge for centuries of being ignored, by getting us to pay them for something that, had we only listened to them, we'd know to be a monstrous leg-pull.

Let them have their fun and their Venn diagrams. Cognitive science already knows better, and is better suited to conceptual mapping with its cognitive mapping in conceptual space.

Bad Idea? (1)

PPH (736903) | more than 6 years ago | (#23579587)

Building a standard "ontological repository" would seem to require establishing a structure within which its objects and relationships can be contained.

While this might seem to be of benefit in extending the capabilities of tasks like machine translation into broader fields, I think it might cause problems at the cutting edge, that is, machine reasoning.

Reasoning about complex problems at the frontiers of knowledge (to paraphrase TFA) requires identifying new links and relationships between objects. Nailing this structure down would seem to hinder that. It might make lower-level tasks (pattern recognition, etc.) simpler, but you need to continually 'break' and 're-sort' the knowledge database to accomplish the higher-level reasoning.

But then, what do I know. I'm just an EE who was working on this stuff about 10 years ago. We got this far when the company boxed it all up and sent it overseas.

Intelligence vs. Appropriate Formal Logic (4, Insightful)

TRAyres (1294206) | more than 6 years ago | (#23579715)

Lots of people are making posts about this vs. Skynet, Terminator, etc. But there are some problems with that (overly simplistic and totally misguided) comment.


There are numerous formal logic solvers that are able to come to either the correct answer (in the case of deterministic systems, for instance) or the answer with the highest likelihood of success. The difference between the two should be made clear. Say I give the computer:

A) All Italians are human. B) All humans are lightbulbs.

What is the logical conclusion? The answer is that all Italians are lightbulbs. Of course, the premises of such an argument are false, but a computer can work out the formally correct conclusion.
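
A toy sketch of that deduction in Python (hypothetical names, standing in for a real theorem prover):

# Premises as "all X are Y" links; validity is separate from truth.
all_are = {"Italian": "human", "human": "lightbulb"}

def conclude(term):
    chain = []
    while term in all_are:   # follow the universal statements transitively
        term = all_are[term]
        chain.append(term)
    return chain

print(conclude("Italian"))   # ['human', 'lightbulb']: all Italians are lightbulbs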


The problem these people seem to be solving is that there needs to be a unified way to input such propositions, and a properly robust and advanced solver that is generic and agreed upon. Basically, this is EXACTLY what is needed to move beyond the research stage, where each lab uses its own pet language.


I mentioned determinism because the example I gave contained the solution in its premises. What if I said, "My chest hurts. What is the most likely cause of my pain?" An expert system (http://en.wikipedia.org/wiki/Expert_system) can take a probability function and return that the most likely cause is... (whatever, I'm not a doctor!). But what if I had multiple systems? The logic becomes fuzzier! So there needs to be an efficient way to implement it AND draw worthwhile conclusions. Such conclusions can be wrong, but they are the best guess (the difference between omniscient and rational, or boundedly rational).
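
In the same spirit, a toy expert-system lookup; the probabilities below are invented purely for illustration:

# Invented numbers: P(cause | "my chest hurts"); not medical advice.
p_cause = {"muscle strain": 0.40, "heartburn": 0.35, "cardiac event": 0.10}

best_guess = max(p_cause, key=p_cause.get)
print(best_guess)   # the most likely cause: a best guess, not a certainty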


None of these things are relating to some kind of 'skynet' intelligence.


IF you DID want to get Skynet-like intelligence, having a useful logic system (like what is planned here) would be the first step, and would allow you to do things like planning, for instance. If I told a robot, "Careful about crossing the street," it would be too costly to train it to replicate human thought exactly. But it can record and understand language reasonably well (at this point), so what can we extract from that language?


Essentially, this is from the school of thought that we need to play to computers' strengths when thinking about designing human-like intelligence, rather than replicating human thought processes from the ground up (which will happen eventually, either through artificial neurons or through simulation of increasingly large batches of neurons). On the other hand, if such simulations lead to the conclusion that human-level consciousness requires more than the model we have, that will lead to a revolution in neuroscience, because we will need a more complex model.


I really can't wait to get more into this, and really hope it isn't just bluster.


Also:

The 'Thinking Machines' title is inflammatory and incorrect, if we use the traditional human as the gauge for the term 'thought'. What is taking place is a highly formalized and rigorous machine interpretation of human thought, and it will not breed human-level intelligence.

Tagging your links doesn't make you an ontologist (4, Insightful)

idlemachine (732136) | more than 6 years ago | (#23579975)

I'm really over this current misuse of "ontology", which is "the branch of metaphysics that addresses the nature or essential characteristics of being and of things that exist; the study of being qua being". Even if you accept the more recent usage of "a structure of concepts or entities within a domain, organized by relationships; a system model" (which I don't), there's still a lot more involved than knowing "appropriate words to build actionable machine commands".

Putting tags on your del.icio.us links doesn't make you an ontologist any more than using object oriented methodologies makes you a platonist. I think the correct label for those who misappropriate terminology from other domains (for no other seeming reason than to make them sound clever) is "wanker". Hell, call yourselves "wankologists" for all I care, just don't steal from other domains because "tagger" sounds so lame.

Re:Tagging your links doesn't make you an ontologi (0)

Anonymous Coward | more than 6 years ago | (#23582281)

Well, if you want to be picky about "ontology", why not your misuse of "methodology", which is rightly the "study of methods and techniques" but has been morphed by popular use to be synonymous with "method" itself.

Semantic Web? (0)

Anonymous Coward | more than 6 years ago | (#23580737)

Sounds very close to the Semantic Web (http://semanticweb.org) to me.

Baroque Cycle anyone? (1)

VoidEngineer (633446) | more than 6 years ago | (#23581447)

Does this remind anybody else of the prime number ontological schema talked about in the Baroque Cycle (by Neal Stephenson)?

Thumbs up for the Butlerian Jihad tag! (1)

joetheappleguy (865543) | more than 6 years ago | (#23581593)

Maybe it's time for MIT and other tech Universities to start a Mentat degree?

Good grief (1)

jandersen (462034) | more than 6 years ago | (#23582321)

This really ties in with this article: http://news.slashdot.org/article.pl?sid=08/05/28/2217230 [slashdot.org]

So, we don't want to fund proper science, or proper education, but we want to build machines that can think for us, so we can concentrate on the important things, like believing that the war in Iraq is about bringing freedom and democracy to the poor people, and that the world was created in 6 days. (BTW, how can one even talk about days before the creation of Heaven and Earth, and crucially the sun?)

Not that this kind of research is bad in itself - we already have 'logic computers' that can construct mathematical proofs, which has made it possible to advance in some areas where brute force seemed to be the only way forward and the task was simply too overwhelming for a human to take on. But the danger is, of course, that this kind of technology will make us intellectually lazy and incompetent, just as many people are now "physically incompetent" because we have machines to do the work for us.

Pipe dream (0)

Anonymous Coward | more than 6 years ago | (#23582859)

In the last 15 years there has been ZERO progress on the AI front. All we need is a program that can learn, just basic stuff to begin with, on the level of a 3-year-old. No one has done that yet. I actually think now that it is impossible.

Look at it this way. Suppose I create a program that can do what I myself can't do, or that derives a fact I did not know how to arrive at. Then I can instrument the program and obtain a log file of the steps taken to solve a certain problem... Remember, we still use deterministic von Neumann hardware.

Then not only has this software solved this particular (and probably insignificant) problem, but more importantly, it has shown a way to algorithmically solve problems in general!!!

The chances of that happening are worse than those of a monkey typing War and Peace in 1,000,000 years.

The "ontology" thing is overrated (1)

Animats (122034) | more than 6 years ago | (#23583149)

All this "agreement" is about is to have a repository for everybody's "ontology" data. It's like SourceForge, only less useful.

Most of what goes in is tuples of the form (relation item1 item2); stuff like this:

(above ceiling floor)
(above roof ceiling)
(above floor foundation)
(in door wall)
(in window wall)
...

The idea is supposed to be that if you put in enough such "facts", intelligence will somehow emerge. The Cyc crowd has been doing that for twenty years, and it hasn't led to much.
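
The kind of shallow inference such tuples support is easy to sketch; here is a naive transitive-closure pass (in Python) over the "above" facts quoted above:

facts = {("above", "ceiling", "floor"),
         ("above", "roof", "ceiling"),
         ("above", "floor", "foundation")}

# Naive forward chaining: above(a, b) and above(b, c) => above(a, c).
changed = True
while changed:
    changed = False
    for _, a, b in list(facts):
        for _, c, d in list(facts):
            if b == c and ("above", a, d) not in facts:
                facts.add(("above", a, d))
                changed = True

print(sorted(facts))   # now includes ('above', 'roof', 'foundation'), etc.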

The classic paper on why this idea is bogus is "Artificial Intelligence Meets Natural Stupidity", by Drew McDermott. That was written in 1976, and it's still relevant. There are plenty of citations of this paper on the Web; if anyone can find the full text, please provide a link.

If something like this ever works, it will probably look more like Bayesian statistics than deductive logic.
