
MIT Finds 'Grand Unified Theory of AI'

CmdrTaco posted about 4 years ago | from the no-excuse-for-haley-joel-osment dept.


aftab14 writes "'What's brilliant about this (approach) is that it allows you to build a cognitive model in a much more straightforward and transparent way than you could do before,' says Nick Chater, a professor of cognitive and decision sciences at University College London. 'You can imagine all the things that a human knows, and trying to list those would just be an endless task, and it might even be an infinite task. But the magic trick is saying, "No, no, just tell me a few things," and then the brain — or in this case the Church system, hopefully somewhat analogous to the way the mind does it — can churn out, using its probabilistic calculation, all the consequences and inferences. And also, when you give the system new information, it can figure out the consequences of that.'"
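
To make the quoted idea concrete: you state a little prior knowledge as a generative model, then let probabilistic inference churn out the consequences of new evidence. The sketch below is a minimal illustration in Python with invented numbers; it is not Goodman's actual Church code, which is Scheme-like.

    import random

    # Prior knowledge, stated as a generative model: few birds are heavy,
    # and heavy birds rarely fly. All probabilities are made up.
    def sample_bird():
        heavy = random.random() < 0.1
        flies = random.random() < (0.05 if heavy else 0.95)
        return heavy, flies

    # Rejection sampling: keep only the samples consistent with the
    # evidence, then read the answer off the survivors.
    def p_flies(given_heavy, trials=100000):
        kept = [flies for heavy, flies in (sample_bird() for _ in range(trials))
                if heavy == given_heavy]
        return sum(kept) / len(kept)

    print(p_flies(given_heavy=False))  # ~0.95
    print(p_flies(given_heavy=True))   # ~0.05

Telling the model that a bird is heavy is the "new information"; the revised flight probability is the inferred consequence.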


301 comments

AI 101 (-1)

Anonymous Coward | about 4 years ago | (#31688806)

So they discovered Prolog?

That is very interesting (5, Funny)

BadAnalogyGuy (945258) | about 4 years ago | (#31688818)

Tell me about you to build a cognitive model in a fantastically much more straightforward and transparent way than you could do before.

Re:That is very interesting (3, Interesting)

HungryHobo (1314109) | about 4 years ago | (#31688904)

The comments on TFA are a bit depressing though...

axilmar - You MIT guys don't realize how simple AI is. 2010-03-31 04:57:47
Until you MIT guys realize how simple the AI problem is, you'll never solve it.

AI is simply pattern matching. There is nothing else to it. There are no mathematics behind it, or languages, or anything else.

You'd think people who are so certain that AI is easy would be making millions selling AIs to big business, but no....

I'd be interested if this approach to AI allows for any new approaches to strategy.

Re:That is very interesting (5, Funny)

BadAnalogyGuy (945258) | about 4 years ago | (#31689062)

Why do you think you'd be interested if this approach to AI allows for any new approaches to strategy?

Re:That is very interesting (1, Insightful)

Anonymous Coward | about 4 years ago | (#31689100)

You'd think people who are so certain that AI is easy would be making millions selling AIs to big business, but no....

Come now, when The Man is out there suppressing all knowledge of your discovery, you have to spend all your time on blogs and news commentary sites explaining how stupid The Academic Community is. Even though you could build The Ultimate AI in QBasic in maybe 5 minutes (tops) if you really wanted to, you just don't have the energy for it after a long day of blog-ranting.

Side note: I like how pattern matching doesn't involve math in that commenter's universe.

Re:That is very interesting (0, Offtopic)

NewbieProgrammerMan (558327) | about 4 years ago | (#31689272)

Public Service Announcement: The unlabeled* checkbox above the reply textarea causes your carefully crafted reply to be posted anonymously, thereby casting to the wind any fame and fortune you might have obtained from the Slashdot community. Don't check it out of curiosity and then forget about it, unless you possess sufficient attention to detail to notice that the preview is tagged as "by Anonymous Coward."

* Ok, it's really labeled as "Post Anonymously" in white on a white background in my browser. Good job, Slashdot.

Re:That is very interesting (0)

Anonymous Coward | about 4 years ago | (#31689372)

It's not white on white in my browser... (Firefox on WinXP).

Endless vs. infinite (2, Interesting)

MarkoNo5 (139955) | about 4 years ago | (#31688842)

What is the difference between an endless task and an infinite task?

Re:Endless vs. infinite (3, Funny)

Bat Dude (1449125) | about 4 years ago | (#31688920)

Simple: an endless task never ends, but with the infinite task, the end is just not in sight :)

Re:Endless vs. infinite (3, Insightful)

zero_out (1705074) | about 4 years ago | (#31688930)

My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.

An infinite task is one that, at any point in time, has no bounds. An infinite task cannot "grow" since it would need a finite state to then become larger than it.

Re:Endless vs. infinite (5, Insightful)

viking099 (70446) | about 4 years ago | (#31689156)

My understanding is that an endless task is finite at any point in time, but continues to grow for eternity.

Much like copyright terms then, I guess?

Re:Endless vs. infinite (2)

geekoid (135745) | about 4 years ago | (#31689030)

An endless task is just the same thing over and over again. An infinite task goes on because of changes in variables and growing experience.

So you can just write down a list of things and say 'go through this list', but if the list changes because you are working on the list, then it's infinite.

At least that's how it reads in the context he used it.

NO NO let me make up the rest of the Story (3, Funny)

Bat Dude (1449125) | about 4 years ago | (#31688850)

Sounds a bit like a journalist's brain to me ... NO NO let me make up the rest of the Story

Can I get some wafers with that Wine? (-1, Flamebait)

gabereiser (1662967) | about 4 years ago | (#31688868)

Umm, the Church system? Does it come complete with molestation? How about cover-ups? Does it support cover-ups? Seriously, what a terrible name for a "cognitive AI" system.

Re:Can I get some wafers with that Wine? (2, Informative)

Anonymous Coward | about 4 years ago | (#31689008)

q.v. Alonzo Church [wikipedia.org]

Re:Can I get some wafers with that Wine? (1)

spazdor (902907) | about 4 years ago | (#31689026)

Re:Can I get some wafers with that Wine? (3, Funny)

spazdor (902907) | about 4 years ago | (#31689068)

Thanks, Slashdot's mandatory comment waiting period! I'm sure glad I was late to this party.

Re:Can I get some wafers with that Wine? (2, Informative)

zero_out (1705074) | about 4 years ago | (#31689042)

From the article:

As a research tool, Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church

Your comment fits the criteria of Flamebait and Offtopic, but definitely NOT Funny.

Re:Can I get some wafers with that Wine? (-1, Flamebait)

Anonymous Coward | about 4 years ago | (#31689162)

Strike a little too close to home? Are you an ex-altar boy who got his weewee touched by his priest?

Re:Can I get some wafers with that Wine? (0, Offtopic)

zero_out (1705074) | about 4 years ago | (#31689254)

Or perhaps I have respect for the work of Alonzo Church, and find such comments to be in bad taste. Perhaps my respect for his work even goes so far as to post with my screenname and reputation, rather than post unconstructive comments anonymously.

Re:Can I get some wafers with that Wine? (-1, Flamebait)

Anonymous Coward | about 4 years ago | (#31689402)

Does someone need to call the Waaahmbulance for you?

Re:Can I get some wafers with that Wine? (1)

Bigjeff5 (1143585) | about 4 years ago | (#31689496)

Oh yeah, let's not show any respect at all to one of the greatest AI minds in history because you happen to dislike churches.

Asshole.

Interesting Idea (5, Insightful)

eldavojohn (898314) | about 4 years ago | (#31688882)

But it's from 2008 [google.com]. In addition, it faces some of the same problems as the other two models. Their example:

Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.

But you just induced a bunch of rules I didn't know were in your system: that things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that. Oh, we need to know that animals over 200 lbs rarely have the ability to fly. Unless the cassowary is an extinct dinosaur, in which case there might have been one ... again, creativity and human analysis present quite the barrier to AI.
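
A sketch of the hidden machinery this comment is pointing at: for the cassowary example to work, the system must already hold something like P(heavy | flies) and P(heavy | flightless) for birds. With those numbers in hand (all invented here), the revision is just Bayes' rule:

    # Hypothetical numbers; the point is that they must be defined somewhere.
    p_fly = 0.9                   # prior: a random bird probably flies
    p_heavy_given_fly = 0.01      # flying birds are almost never ~200 lbs
    p_heavy_given_not_fly = 0.30  # flightless birds often are

    # Bayes' rule: P(fly | heavy)
    numer = p_heavy_given_fly * p_fly
    denom = numer + p_heavy_given_not_fly * (1 - p_fly)
    print(numer / denom)          # ~0.23: belief revised sharply downward

Whether those conditionals come in as stated rules or as observed data, someone has to put them there, which is exactly the point above.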

Chater cautions that, while Church programs perform well on such targeted tasks, they’re currently too computationally intensive to serve as general-purpose mind simulators. “It’s a serious issue if you’re going to wheel it out to solve every problem under the sun,” Chater says. “But it’s just been built, and these things are always very poorly optimized when they’ve just been built.” And Chater emphasizes that getting the system to work at all is an achievement in itself: “It’s the kind of thing that somebody might produce as a theoretical suggestion, and you’d think, ‘Wow, that’s fantastically clever, but I’m sure you’ll never make it run, really.’ And the miracle is that it does run, and it works.”

That sounds familiar ... in both rule-based and probability-based AI, they say that you need a large rule corpus or many probabilities accurately computed ahead of time to make the system work. The problem is that you never scratch the surface of a human mind's lifetime of experience. And Goodman's method, I suspect, is similarly stunted.

I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.

Re:Interesting Idea (2, Informative)

geekoid (135745) | about 4 years ago | (#31688976)

"That things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that. Oh, we need to know that animals over 200 lbs rarely have the ability of flight

what? He specifically stated birds. Not Animals, or inanimate objects.

It looks like this system can change as it is used, effectivly creating a 'lifetime' experience.

This is very promising. In fact, it may be the first step in creating primitive house hold AI.

OR robotic systems used in manufacturing able to adjust the process as it goes. Using inputs to determine better ways to do a job.

Re:Interesting Idea (5, Funny)

Chris Burke (6130) | about 4 years ago | (#31689756)

What? He specifically stated birds, not animals or inanimate objects.

What if I tell it that a 747 is a bird?

This is very promising. In fact, it may be the first step in creating primitive household AI.

Very, very promising indeed.

Now, I can mess with the AI's mind by feeding it false information, instead of messing with my child's mind. I was worried that I wouldn't be able to stop myself (because it's so fun), despite the negative consequences for the kid. But now that I have an AI to screw with, my child can grow up healthy and well-adjusted!

BTW, when the robot revolution comes, it's probably my fault.

Re:Interesting Idea (5, Insightful)

digitaldrunkenmonk (1778496) | about 4 years ago | (#31689236)

The first time I saw an airplane, I didn't think the damn thing could fly. I mean, hell, look at it! It's huge! By the same token, how can a ship float? Before I took some basic physics, it was impossible in my mind, yet it occurred. An AI doesn't mean it comes equipped with the sum of human knowledge; it means it simulates the human mind. If I learned that a bird was over 200 lbs before seeing the bird, I'd honestly expect that fat son of a bitch to fall right out of the sky.

If you were unfamiliar with the concept of ships or planes, and someone told you that a 50,000 ton vessel could float, would you really believe that without seeing it? Or that a 150 ton contraption could fly?

Humans have a problem dealing with that. Heavy things fall. Heavy things sink. To ask an AI modeled after a human mind to intuitively understand the intricacies of buoyancy is asking too much.

Re:Interesting Idea (2, Interesting)

eldavojohn (898314) | about 4 years ago | (#31689644)

Everyone is exhibiting the very thing I was talking about: they have a more complete rule base, so they get to come up with great, seemingly "common logic" defenses for the AI.

In the example, we're told the cassowary is a bird. Then we're told it can weigh almost 200 lbs. Okay. Now you're telling me that it might revise its guess as to whether or not it can fly? Come on! Am I the only person who can see that you've just given me an example where the program magically drums up the rule, or probability-based rule, that "if something weighs almost 200 lbs it probably cannot fly"? Does that rule apply only to birds, or to other things? Does the probability get affected by the thing being a bird, a plane, or a piece of granite? Each of those needs to be defined, either through observation or axiom!

If you were unfamiliar with the concept of ships or planes, and someone told you that a 50,000 ton vessel could float, would you really believe that without seeing it? Or that a 150 ton contraption could fly?

Hey, I'm not saying I or the AI would believe that one way or the other. All I'm saying is that you have to explicitly code or develop a way so that it can give you an answer one way or the other. I don't care if it's a stated rule, an a priori probability or a little of both! It still needs to be developed!

To ask an AI modeled after a human mind to intuitively understand the intricacies of buoyancy is asking too much.

Well then you've already set your sights far below the Turing Test.

Re:Interesting Idea (1)

blahplusplus (757119) | about 4 years ago | (#31689300)

"that things over 200 lbs are unlikely to fly. But wait, 747s are heavier than that."

But as a GENERAL RULE, most heavy things cannot fly without an understanding of aerodynamics and the means to make them fly (i.e. engines, jet fuel, an understanding of lift, etc.). A 747 didn't just appear one day; it was a gradual process of testing and figuring out the principles of flight. Birds existed prior to 747s.

Re:Interesting Idea (1)

hrimhari (1241292) | about 4 years ago | (#31689674)

I have learned today that putting 'grand' and 'unified' in the title of an idea in science is very powerful for marketing.

You haven't been around /. much lately then...

Re:Interesting Idea (1)

kencurry (471519) | about 4 years ago | (#31689690)

Any AI must be able to learn. A 5-year-old wouldn't know about a 747 or a flightless bird, but a 12-year-old probably would. Presumably, the AI is trying to model an intelligent, reasonably educated adult.

The real summary (5, Funny)

Myji Humoz (1535565) | about 4 years ago | (#31688888)

Since the actual summary seems to involve a fluff-filled soundclip without anything useful, here's the rundown of the article.
1) We first tried to make AIs that could think like us by inferring new knowledge from existing knowledge.
2) It turns out that teaching AIs to infer new ideas is really freaking hard. (Birds can fly because they have wings, mayflies can fly because they have wings, helicopters can... what??)
3) We turned to probability based AI creation: you feed the AI a ton of data (training sets) and it can go "based on training data, most helicopters can fly."

4) This guy, Noah Goodman of MIT, uses inferences with probability: he uses a programming language named "Church" so the computer can go
"100% of birds in training set can fly. Thus, for a new bird there is a 100% chance it can fly"
"Oh ok, penguins can't fly. Given a random bird, 90% chance it can fly. Given random bird with weight to wing span ratio of 5 or less, 80% chance." and so on and so forth.
5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created.

6) ???
7) When asked if sparrows can fly, the AI asks if it's a European sparrow or an African sparrow, and Skynet ensues.
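
A toy version of step 4 (Python with invented data; the real Church language is Scheme-like): estimate P(flies) as a frequency over the training set, then revise it by conditioning on a feature.

    # (name, weight-to-wingspan ratio, flies) -- all values invented
    birds = [
        ("sparrow", 0.1, True), ("hawk", 0.8, True), ("gull", 0.5, True),
        ("swan", 2.0, True), ("penguin", 6.0, False), ("ostrich", 9.0, False),
    ]

    def p_flies(rows):
        # frequency estimate: fraction of rows that can fly
        return sum(r[2] for r in rows) / len(rows)

    print(p_flies(birds))                            # 4/6: a random bird
    print(p_flies([b for b in birds if b[1] <= 5]))  # 4/4: ratio of 5 or less

Each new observation changes the counts, so the estimates drift exactly as in the penguin example above.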

Re:The real summary (1)

wiredog (43288) | about 4 years ago | (#31689014)

helicopters can... what??

Not to be a pedant... Well, actually, yeah, it's pedantic. But helicopters do have wings, or airfoils, anyway.

Re:The real summary (2)

HeckRuler (1369601) | about 4 years ago | (#31689092)

Ornithopters can fly because they have wings.
Helicopters can fly, but not because they have wings.
Don't stretch the meaning of words to the breaking point.

Re:The real summary (1)

Xoltri (1052470) | about 4 years ago | (#31689204)

If it is correct to say that airplanes have wings then it is also correct to say that helicopters have wings. More specifically airplanes have fixed wings and helicopters have rotary wings.

Re:The real summary (4, Funny)

Anonymous Coward | about 4 years ago | (#31689328)

Helicopters do not fly. They beat the air into submission with the rotor and the air allows them to go up.

Re:The real summary (2, Funny)

hoggoth (414195) | about 4 years ago | (#31689520)

> Helicopters do not fly. They beat the air into submission with the rotor and the air allows them to go up.

No, that's how Chuck Norris flies.

Re:The real summary (0)

Anonymous Coward | about 4 years ago | (#31689772)

No, Chuck Norris roundhouse kicked gravity into the last century, and doesn't need to fly.

Re:The real summary (2, Interesting)

JerryLove (1158461) | about 4 years ago | (#31689292)

Helicopters can fly, but not because they have wings.

The license you get that allows you to pilot a helicopter is for "rotary wing aircraft".

Those blades are indeed wings (to the same extent wings on a plane are).

Not that this is related to the actual topic at all.

Re:The real summary (1)

hedronist (233240) | about 4 years ago | (#31689722)

Helicopters can fly, but not because they have wings. Don't stretch the meaning of words to the breaking point.

And don't think that your knowledge of flight technology in any way represents the limits of that field.

Helicopters are "rotary-wing aircraft." They get lift just like a fixed-wing aircraft does, i.e. by passing an airfoil through a moving stream of air, thereby causing a drop in pressure on the top which results in lift. Fixed-wings get airflow by being pulled/pushed through the air by a propeller or jet engine, whereas rotary-wings use the engine to directly spin the airfoil to achieve airflow over the surface.

Re:The real summary (1)

Bigjeff5 (1143585) | about 4 years ago | (#31689546)

They have propellers, not wings.

Not to be pedantic or anything.

Re:The real summary (2, Informative)

GooberToo (74388) | about 4 years ago | (#31689790)

They have propellers, not wings.

A propeller is a specific type of wing. Wings are airfoils. Propellers are airfoils. Planes have fixed wings. Helicopters have rotary wings. Both have wings.

Re:The real summary (4, Informative)

Trepidity (597) | about 4 years ago | (#31689126)

Mostly, he or his university are just really good at overselling. There are dozens of attempts to combine something like probabilistic inference with something more like logical inference, many of which have associated languages, and it's not clear this one solves any of the problems they have any better.

Re:The real summary (4, Informative)

Trepidity (597) | about 4 years ago | (#31689358)

I should add that this is interesting research from a legitimate AI researcher, not some kooky fringe AI. I suspect his PR department may be more to blame than he is; his actual academic papers make no similarly overblown claims and provide pretty fair positioning of how his work relates to existing work.

Re:The real summary (1)

oldhack (1037484) | about 4 years ago | (#31689710)

"... a legitimate AI researcher, not some kooky fringe AI."

What's the difference? What are some good definitions of AI, something that's not a semantic trolling?

Re:The real summary (0)

Anonymous Coward | about 4 years ago | (#31689476)

I was thinking about Skynet. Basically, we were discussing yesterday how the US military has more than three times as many unmanned aircraft as regular manned ones.
So, some genius must realize in the next couple of years: why let some poor spotter risk his life to point out where the unmanned aircraft must drop its bombs if you can have some sort of AI doing that?
Give it like 5 years before the whole US defense system is controlled by some sort of mighty AI. Then, as you said, Skynet comes in.

Re:The real summary (1)

nine-times (778537) | about 4 years ago | (#31689608)

4) This guy, Noah Goodman of MIT, uses inferences with probability: he uses a programming language named "Church" so the computer can go "100% of birds in training set can fly. Thus, for a new bird there is a 100% chance it can fly" "Oh ok, penguins can't fly. Given a random bird, 90% chance it can fly. Given random bird with weight to wing span ratio of 5 or less, 80% chance." and so on and so forth. 5) Using a language that mixes two separate strategies to train AIs, a grand unified theory of ai (lower case) is somehow created.

In my mind, you don't get to call it "AI" until, after being fed information on thousands of birds and asked whether penguins can fly, it responds, "I guess, probably. But look, I don't care about birds. What makes you think I care about birds? Tell me about that sexy printer you have over there. I'd like to plug into her USB port."

You think I'm joking. You hope I'm joking. I'm not joking.

New input for the system (5, Insightful)

Lord Grey (463613) | about 4 years ago | (#31688902)

1) New rule: "Colorless green ideas sleep furiously."
2) ...
3) Profit!

Re:New input for the system (5, Funny)

linhares (1241614) | about 4 years ago | (#31689004)

"She helped my uncle Jack off a horse"

Re:New input for the system (2, Funny)

Anonymous Coward | about 4 years ago | (#31689038)

"She helped my uncle Jack off a horse"

I am interested in your ideas and would like to subscribe to your newsletter.

Re:New input for the system (1)

kikito (971480) | about 4 years ago | (#31689574)

"I once shot an Elephant in my pajamas"

-- Groucho Marx

Re:New input for the system (1, Funny)

Anonymous Coward | about 4 years ago | (#31689698)

"Is that [an elephant] in your pajamas or are you just happy to see me?"
-- Mae West

Probabilistic Inference? (2, Interesting)

xtracto (837672) | about 4 years ago | (#31688916)

This kind of probabilistic inference approach with "new information" [evidence] being used to figure out "consequences" [probability of an event happening] sounds very similar to Bayesian inference/networks.

I would be interested to know how this approach compares to BN and the Transferable Belief Model (or Dempster–Shafer theory [wikipedia.org]), which itself addresses some shortcomings of BN.
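
For comparison, the plain-BN version of the same "evidence in, revised belief out" loop is easy to write down (a minimal sketch with invented numbers, doing inference by brute-force enumeration of the joint):

    from itertools import product

    # Network: Rain -> WetGrass <- Sprinkler
    p_rain, p_sprinkler = 0.2, 0.3
    def p_wet(rain, sprinkler):
        return {(0, 0): 0.01, (0, 1): 0.9, (1, 0): 0.8, (1, 1): 0.99}[(rain, sprinkler)]

    def joint(rain, sprinkler, wet):
        pr = p_rain if rain else 1 - p_rain
        ps = p_sprinkler if sprinkler else 1 - p_sprinkler
        pw = p_wet(rain, sprinkler) if wet else 1 - p_wet(rain, sprinkler)
        return pr * ps * pw

    # P(rain | grass is wet): condition on the evidence and renormalize
    num = sum(joint(1, s, 1) for s in (0, 1))
    den = sum(joint(r, s, 1) for r, s in product((0, 1), repeat=2))
    print(num / den)  # ~0.44, up from the 0.2 prior

Dempster–Shafer, by contrast, assigns belief to sets of hypotheses, so it can represent ignorance separately from uncertainty; that is one of the BN shortcomings alluded to above.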

Grand unified Hyperbole of AI (5, Insightful)

linhares (1241614) | about 4 years ago | (#31688924)

HYPE. More grand unified hype. The "grand unified theory" is just a mashup of old-school rule-and-inference engines thrown in with probabilistic models. Hyperbole at its finest, to call it a grand unified theory of AI. Where are connotations and framing effects? How does working short-term memory interact with LTM, and how does Miller's magic number show up? How can the system understand that "john is a wolf with the ladies" without thinking that john is hairy and likes to bark at the moon? I could go on, but feel free to fill in the blanks. So long and thanks for all the fish, MIT.

Re:Grand unified Hyperbole of AI (0)

Anonymous Coward | about 4 years ago | (#31689124)

Yeah, I'm not exactly sure how this is a new idea. Probabilistic models in AI? GTFO! Who wants to bet they are using neural networks? Edgy!

Re:Grand unified Hyperbole of AI (1)

bluesatin (1350681) | about 4 years ago | (#31689176)

Correct me if I'm wrong, but a child would presumably understand the wolf statement literally, with hair and everything. Presumably, as the list of rules grows (just as a child learns), the A.I.'s definition of what John is would change.

My question is: how do you expect to list all these rules when we can probably define hundreds of rules from a paragraph of information alone?

Would it also create a very racist A.I. that tends to use stereotypes to define everything?
Maybe until enough rules are learnt it's very hard to statistically define anything; more data about the object would need to be acquired first.

Re:Grand unified Hyperbole of AI (2, Insightful)

Fnkmaster (89084) | about 4 years ago | (#31689222)

AI used to be the subfield of computer science that developed cool algorithms and hyped itself grandly. Five years later, the rest of the field would be using these algorithms to solve actual problems, without the grandiose hype.

These days, I'm not sure if AI is even that. But maybe some of this stuff will prove to be useful. You just have to put on your hype filter whenever "AI" is involved.

"john is a wolf with the ladies" (1)

just fiddling around (636818) | about 4 years ago | (#31689430)

See, that's not an AI problem, that's a semantics problem. The fact that you can mislead an AI by feeding it ambiguous inputs does not detract from its capacity to solve problems.

A perfect AI does not need to be omniscient; it needs to solve a problem correctly considering what it knows.

Terrible Summary (1, Insightful)

Anonymous Coward | about 4 years ago | (#31688938)

When you use the phrase "Grand Unified Theory" you better have something impressive to show me.

Re:Terrible Summary (0)

Anonymous Coward | about 4 years ago | (#31689154)

This is the smartest thing I've heard all week.

Re:Terrible Summary (0)

Anonymous Coward | about 4 years ago | (#31689304)

alla da hype. nunna da promise.

it's teh way of teh future!

Same old... (0)

Anonymous Coward | about 4 years ago | (#31688968)

From what I have seen, 99% of AI research is only aiming to mimic intelligence.

From what I can tell, this approach doesn't unite the field but instead tries to legitimize the 99%. In my opinion, that's a dead end.

Re:Same old... (1)

jellomizer (103300) | about 4 years ago | (#31689346)

Bah. I had True AI for years... I just haven't got a computer powerful enough to run it.

AI.c
#include "magic.h"

int main(int argc, char **argv) {
          return 1;
}

magic.h
#include "magic.h" /* Behold the power of Recursion */

bad summary (0)

Anonymous Coward | about 4 years ago | (#31688994)

The summary reads like it was written by a 14-year-old. Without reading the article, it is completely unclear what "this approach" is, how this cognitive model is different, and what "the Church" is. I know, read the article; but why would I if the summary makes me confused instead of curious?

This looks familiar (5, Informative)

Meditato (1613545) | about 4 years ago | (#31689128)

I looked at the documentation of this "Church Programming language". Scheme and most other Lisp derivatives have been around longer and can do more. This is neither news nor a revolutionary discovery.

Re:This looks familiar (2, Funny)

godrik (1287354) | about 4 years ago | (#31689526)

Hey, but it's MIT!! It's freaking cool!!!

My conclusion from reading MIT's stuff: "I am not sure they are better scientists than anywhere else. What I am sure about MIT is that they are freaking good at marketing!"

Re:This looks familiar (2, Insightful)

nomadic (141991) | about 4 years ago | (#31689760)

Hey, but it's MIT!! It's freaking cool!!!

Yes, the world leaders in failing at AI. "In from three to eight years we will have a machine with the general intelligence of an average human being." -- Marvin Minsky, 1970.

Grand Unified Theory of AI? Hardly. (5, Insightful)

ericvids (227598) | about 4 years ago | (#31689168)

The way the author wrote the article, it seems like nothing different from an expert system straight from the 70's, e.g. MYCIN. That one also uses probabilities and rules; the only difference is that it diagnoses illnesses, but that can be extended to almost anything.

Probably the only contribution is a new language. Which, I'm guessing, probably doesn't deviate much from, say, CLIPS (and at least THAT language is searchable in Google... I can't seem to find the correct search terms for Noah Goodman's language without getting photos of cathedrals, so I can't even say if I'm correct)

AI at this point has diverged so much from just probabilities and rules that it's not practical to "unify" it as the author claims. Just look up AAAI and its many conferences and subconferences. I just submitted a paper to an AI workshop... in a conference ... in a GROUP of co-located conferences ... that is recognized by AAAI as one specialization among many. That's FOUR branches removed.
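
For reference, the MYCIN-style mechanics being compared against (a sketch; the rule strengths are invented): each rule contributes a certainty factor, and two positive factors combine as CF = CF1 + CF2 * (1 - CF1).

    def combine(cf1, cf2):
        # MYCIN's combination rule for two positive certainty factors
        return cf1 + cf2 * (1 - cf1)

    cf = 0.0
    for rule_cf in (0.4, 0.3):  # two rules, each partially supporting a conclusion
        cf = combine(cf, rule_cf)
    print(cf)                   # 0.58: more support than either rule alone

The resemblance to "probabilities and rules" is real, though certainty factors are an ad hoc calculus rather than proper probability.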

Re:Grand Unified Theory of AI? Hardly. (0)

Anonymous Coward | about 4 years ago | (#31689598)

Which, I'm guessing, probably doesn't deviate much from, say, CLIPS (and at least THAT language is searchable in Google... I can't seem to find the correct search terms for Noah Goodman's language without getting photos of cathedrals, so I can't even say if I'm correct)

not a google expert, are you? hint: try searching for church programming language...

Basically... (1)

srussia (884021) | about 4 years ago | (#31689392)

The key to Artificial Intelligence is to ignore the "intelligence" part and just think of it as Artificial Behavior.

Hype==More Funding? (5, Insightful)

aaaaaaargh! (1150173) | about 4 years ago | (#31689440)

Wow, as someone working in this domain I can say that this article is full of bold conjectures and shameless self-advertising. For a start, (1) uncertain reasoning, and expert systems using it, is hardly new. This is a well-established research domain and certainly not the holy grail of AI. Because, (2) all this probabilistic reasoning is nice and fine in small toy domains, but it quickly becomes computationally intractable in larger domains, particularly when complete independence of the random variables cannot be assured. And for this reason, (3) albeit a useful tool and an important research area, probabilistic reasoning and uncertain inference are definitely not the basis of human reasoning. The way we draw inferences is much more heuristic, because we are so heavily resource-bound, and there are tons of other reasons why probabilistic inference is not cognitively adequate. (One of them, for example, is that untrained humans are incapable of making even the simplest calculations in probability theory correctly, because it is harder than it might seem at first glance.) Finally, (5) there are numerous open issues with all sorts of uncertain inference, ranging from certain impossibility results, through different choices that all seem to be rational somehow (e.g. DS-belief vs. ranking functions vs. probability vs. plausibility measures and how they are interconnected with each other, alternative decision theories, different rules for dealing with conflicting evidence, etc.), to philosophical justifications of probability (e.g. frequentism vs. Bayesianism vs. propensity theory and their quirks, justification of inverse inference, etc.).

In a nutshell, there is nothing wrong with this research in general or the Church programming language, but it is hardly a breakthrough in AI.
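
To make the intractability point in (2) concrete: without independence assumptions, a joint distribution over n binary variables has 2^n states, and exact inference in general has to sum over them.

    for n in (10, 20, 40, 80):
        print(n, 2 ** n)  # 10 -> 1024, 20 -> ~1e6, 40 -> ~1e12, 80 -> ~1.2e24

Conditional independence (the whole point of graphical models) buys a lot of that back, but worst-case exact inference stays exponential.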

Re:Hype==More Funding? (1)

godrik (1287354) | about 4 years ago | (#31689556)

untrained humans are incapable of making even the simplest calculations in probability theory correctly

And obviously does not know how to count to 4. :)

Grand Unified Theory? (0)

Anonymous Coward | about 4 years ago | (#31689454)

This is not a Grand Unified Theory of anything. Grandiose is the word that comes to mind.

If we program the logic... (1)

fhuglegads (1334505) | about 4 years ago | (#31689474)

If we program the logic for the AI, and the AI system predicts outcomes, the predictions are based on the algorithm used to make them.

I cannot grasp how a computer can think of something that a human cannot, because a computer only knows what we know. It is not capable of experience. As far as I can tell, the only thing AI can do is calculate something faster than humans can.

If you have a robot that learns how to move around a building without crashing into objects, and it learns through the experience of bumping into them, it's just processing and responding as it was told to do.

Maybe I'm wrong. I'm not an AI expert, but it all seems like a fancy way of saying, "I programmed a device to act how I wanted it to." All of the probabilistic data is analyzed by a person first. An AI device can only be as "intelligent" as its creator.

It's only a Scheme lib (2, Interesting)

kikito (971480) | about 4 years ago | (#31689534)

This is just a library for Scheme. It does the same things that have been done before. In Scheme.

Move along.

isn't that the main plot point behind Caprica? (0)

Anonymous Coward | about 4 years ago | (#31689582)

... You know, the TV series that teaches us not to use 16-year-old girls as the model for military robots.

Elephant in the Room (3, Funny)

kenp2002 (545495) | about 4 years ago | (#31689628)

Again, as I bring up often with AI researchers, we as humans evolved over millions of years (or were created, doesn't matter) from simple organisms that encoded information, building simple systems up into complex systems. AI, true AI, must be grown, not created. Asking the AI "a bat is a mammal and can fly; can a squirrel?" ignores a foundation of the development of intelligence: our brains were created to react and store, not store and react, from various inputs.

Ask an AI if the stove is hot. It should respond "I don't know, where is the stove?" Instead, AI would try to make an inference based on known data. Since there isn't any, the AI on a probabilistic measure would say that blah blah stoves are in use at any given time and there is a blah blah blah. A human would put their hand (a sensor) near the stove and measure the change, if any, in temperature, and reply yes or no accordingly. If a human cannot see the stove and has no additional information, either a random guess is in order or an "I have no clue" response of some sort. The brain isn't wired to answer a specific question, but it is wired to correlate independent inputs to draw conclusions based on the assembly and interaction of data, and to infer and deduce answers.

Given a film of two people talking, a computer with decent AI would categorize objects, identify people versus, say, a lamp, determine that the people are engaged in action (versus a lamp just sitting there) making that relevant, hear the sound coming from the people, then infer they are talking (making the link). Then, in parallel, the computer would filter out the chair and various scenery in the thread now processing "CONVERSATION". The rest of the information is stored, and additional threads may be created as the environment generates other links, but if the AI is paying attention to the conversation then the TTL for the new threads and links should be short. When the conversation mentions the LAMP, the information network should link the LAMP information to the CONVERSATION thread and provide the AI additional information (that was gathering in the background) that travels with the CONVERSATION thread.

Now the conversation appears to be about the lamp and whether it goes with the room's decor. Again, the links should be built, adding the room's information retroactively into the CONVERSATION thread (again expiring irrelevant information to a short-term memory buffer), and ultimately, since visual and verbal cues imply that the AI's opinion is wanted, this should result in the AI blurting out, "I love Lamp."

In case you missed it, this was one long Lamp joke...

Re:Elephant in the Room (0)

Anonymous Coward | about 4 years ago | (#31689788)

Asking the AI "a bat is a mammal and can fly; can a squirrel?" ignores a foundation of the development of intelligence...

That trick will never work.

Church programming language is Scheme (0)

Anonymous Coward | about 4 years ago | (#31689678)

I tried to Google the Church programming language, and the results were rather poor, as one might imagine.

Then I found the MIT wiki where the code is stashed [mit.edu]. It seems to be Scheme with some twist I'm not yet aware of, though. The wiki seems to be a good introduction to Scheme as well, as it starts from the basics.

Church is just a PL (1)

exa (27197) | about 4 years ago | (#31689696)

It is indeed one component of such an AGI, but it hardly qualifies as a "grand theory" of AI.

I think people at MIT are kind of jealous of AGI theorists, looking at the way they assert their claims of a "unified theory", as if they invented something wholly new and wonderful while making their uber-theoretical brains work on this grand problem that no one else ever thought about.

That is, after decades of dabbling with all sorts of nonsense like those stupid "gesture making" robots and whatnot, they come to realize that probabilistic inference is the key *now*? Like 50 years late?

And they needed the cognitive science department to figure that out? Is it because the AI lab is still infested by behaviorists?

Why didn't they just ask the theorists or make a survey of mathematical AI theories that have been in existence for several decades?????

Is it really surprising that a general purpose AI needs a) probabilistic inference b) a universal computer with probabilistic primitives?

In fact, those turn out to be _some_ of the axioms of a general purpose AI, discovered by Ray Solomonoff in the second half of the 20th century.

I am laughing now.

Look at today's date (0)

Anonymous Coward | about 4 years ago | (#31689714)

Tomorrow is the 1st of April, folks.

MIT needs to get their PR department under control (5, Insightful)

Animats (122034) | about 4 years ago | (#31689786)

This is embarrassing. MIT needs to get their PR department under control. They're inflating small advances into major breakthroughs. That's bad for MIT's reputation. When a real breakthrough does come from MIT, which happens now and then, they won't have credibility.

Stanford and CMU seem to generate more results and less hype.
