
IQ Test Pegs ConceptNet 4 AI About As Smart As a 4-Year-Old

timothy posted about 9 months ago | from the tell-me-when-it-can-hide-in-a-cabinet-for-fun dept.

AI 121

An anonymous reader writes "Artificial and natural knowledge researchers at the University of Illinois at Chicago have IQ-tested one of the best available artificial intelligence systems to see how intelligent it really is. Turns out, it's about as smart as the average 4-year-old. The team put ConceptNet 4, an artificial intelligence system developed at M.I.T., through the verbal portions of the Wechsler Preschool and Primary Scale of Intelligence Test, a standard IQ assessment for young children. They found ConceptNet 4 has the average IQ of a young child. But unlike most children, the machine's scores were very uneven across different portions of the test." If you'd like to play with the AI system described here, take note of the ConceptNet API documentation, and this Ubuntu-centric installation guide.
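For anyone who does want to poke at it, the gist of working with ConceptNet is walking a graph of (concept, relation, concept) assertions. A minimal Python sketch, using a made-up edge list shaped roughly like the API's JSON; the field names and values here are illustrative, not the exact API payload:

```python
import json

# Illustrative response shaped like a ConceptNet-style edge list.
# The real API's JSON differs in detail; this is just the idea.
SAMPLE = json.loads("""
{"edges": [
  {"rel": "IsA",         "start": "ice", "end": "frozen water", "weight": 2.0},
  {"rel": "HasProperty", "start": "ice", "end": "cold",         "weight": 1.5},
  {"rel": "AtLocation",  "start": "ice", "end": "freezer",      "weight": 1.0}
]}
""")

def assertions_about(concept, edges):
    """Return (relation, other-end) pairs for a concept, strongest first."""
    hits = [e for e in edges if e["start"] == concept]
    hits.sort(key=lambda e: e["weight"], reverse=True)
    return [(e["rel"], e["end"]) for e in hits]

print(assertions_about("ice", SAMPLE["edges"]))
```

Sorting by weight reflects the general design: assertions in the knowledge base carry confidence scores, so a query prefers the strongest ones first.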


121 comments

More like autistic-savant 4 year old (5, Insightful)

schneidafunk (795759) | about 9 months ago | (#44317231)

From the article: “If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC, and lead author on the study.

Re:More like autistic-savant 4 year old (5, Funny)

Metabolife (961249) | about 9 months ago | (#44317259)

Similar to how a typical Slashdot user might score amazingly well on the math section and then never score in real life?

Re:More like autistic-savant 4 year old (1)

noh8rz8 (2716593) | about 9 months ago | (#44317551)

this just in - iq test pegs noh8rz as a toddler, other commenters not specified.

Re:More like autistic-savant 4 year old (3, Funny)

Anonymous Coward | about 9 months ago | (#44317869)

Damn it, I'm not even good at slashdot reading - I score bad at math and don't get laid! :(

Re:More like autistic-savant 4 year old (4, Funny)

Vanderhoth (1582661) | about 9 months ago | (#44317293)

Obligatory, I for one welcome our new four year old mentally unstable electronic overlords.

Re:More like autistic-savant 4 year old (0)

Anonymous Coward | about 9 months ago | (#44318403)

Nope, it won't become overlord till it reaches the IQ of the average large corp CEO.

Re:More like autistic-savant 4 year old (2, Funny)

Anonymous Coward | about 9 months ago | (#44317443)

Of course you have to compare it to a child. How else will it ever ascend to becoming a contestant on "Are You Smarter Than A 5th Grader?"

Re: More like autistic-savant 4 year old (0)

Anonymous Coward | about 9 months ago | (#44317541)

Mod this up. Funniest line for a Thursday morning

Re:More like autistic-savant 4 year old (4, Interesting)

Anonymous Coward | about 9 months ago | (#44317627)

It is indeed an indicator of something wrong. Can you imagine an AI with the mores and intelligence of a small child, but with access to vast amounts of information? Have you ever seen a small child in the store? "I HATE you, mommy!" Add that emotional maturity to that intelligence, and that mommy would be dead. Machines like this will be dangerous.

Re:More like autistic-savant 4 year old (3, Insightful)

Anonymous Coward | about 9 months ago | (#44319283)

Being as intelligent as a 4 year old human on an IQ test is not even remotely related to having the learning abilities of a 4 year old human.

Re:More like autistic-savant 4 year old (1)

Jane Q. Public (1010737) | about 9 months ago | (#44318983)

"From the article: “If a child had scores that varied this much, it might be a symptom that something was wrong,” said Robert Sloan, professor and head of computer science at UIC, and lead author on the study."

Yes, exactly. A 4 year old, maybe. But a severely mentally damaged 4 year old.

Re:More like autistic-savant 4 year old (1)

zifn4b (1040588) | about 9 months ago | (#44319353)

I need to get out of IT before I have to support one of these "intelligent" systems when it has a temper tantrum!

Re:More like autistic-savant 4 year old (1)

Jeremiah Cornelius (137) | about 9 months ago | (#44319549)

Bullshite.

It just goes to show you what a pile of crap the Wechsler and Stanford-Binet are. And this is from someone who has a bigger estimated score than my UID. It's bollocks.

Any unimpaired four-year-old knows not to eat feces. That's not measurable through the bias of IQ, and I doubt these "nets" know the difference between that... and Shinola.

Sample output when tested (5, Funny)

Anonymous Coward | about 9 months ago | (#44317261)

No!

Re:Sample output when tested (1)

Anonymous Coward | about 9 months ago | (#44318107)

Why?

Re:Sample output when tested (0)

Anonymous Coward | about 9 months ago | (#44318305)

I win!

But (2)

Sparticus789 (2625955) | about 9 months ago | (#44317289)

Does the AI use contractions?

Re:But (1)

Vanderhoth (1582661) | about 9 months ago | (#44317319)

I never really understood Data's issue with using contractions. There are pretty well-defined rules for them, and rules are something that can be easily programmed, so he really shouldn't have had an issue with them.

Re:But (1)

Sparticus789 (2625955) | about 9 months ago | (#44317447)

He does use them in the series finale, in the alternate future timeline. Really, I think it was more a plot device to make sure the crew could distinguish him from Lore, and to be brought up on occasion. Like the episode where Riker is taken hostage by an alien on the surface that wants him to be his Dad, and during a hallucination Data uses a contraction, which served as the final piece of evidence that the events going on were not real.

Re:But (1)

mhajicek (1582795) | about 9 months ago | (#44317811)

He should have been able to whistle too.

Re:But (2)

TWiTfan (2887093) | about 9 months ago | (#44317915)

The whistling thing I could understand, since he presumably had no respiration.

Re:But (0)

Anonymous Coward | about 9 months ago | (#44318301)

Then how did he make any other sound?

Re:But (1)

RobinH (124750) | about 9 months ago | (#44318687)

Hmm, a speaker? You think he really had vocal cords?

Re:But (0)

Anonymous Coward | about 9 months ago | (#44318873)

Well, he claimed he was “fully functional” in other areas that are not strictly necessary for robots. Who knows what other parts of the human body his creator thought necessary ;)

Re:But (2)

RobinH (124750) | about 9 months ago | (#44318711)

The guy who built him deliberately made him *less* lifelike than Lore because the colonists didn't like how eerily human-like Lore was. His inability to use contractions was one of these things.

Re:But (1)

Vanderhoth (1582661) | about 9 months ago | (#44319303)

Thank you. That makes more sense and I remember the episode where that's explained now. I guess it's time to hand in my TNG card.

Re:But (1)

kenj0418 (230916) | about 9 months ago | (#44318893)

I never really understood Data's issue with using contractions.

I thought it was a deliberate inability programmed in by his creator to make Data less like Lore and to make him feel less threatening to the other colonists.

Re:But (1)

ebno-10db (1459097) | about 9 months ago | (#44317361)

No, but at least it doesn't hang when you ask it the value of pi.

Re:But (4, Funny)

leonardluen (211265) | about 9 months ago | (#44317413)

for a 4 year old i am pretty sure the value of pi is "more" and possibly "with ice cream"

Re:But (2, Funny)

Laxori666 (748529) | about 9 months ago | (#44317857)

No you see, by pi he meant the mathematical number - 3.1415... - not pie as in "food". Even so, the value of pie wouldn't be "more" and "with ice cream" - that's not the value of something, those are desires and descriptors of something. Value would be more like how many dollars it's worth or how many good deeds/grades one has to get to receive the pie.

Re:But (1)

elfprince13 (1521333) | about 9 months ago | (#44318187)

I guess it was too early to expect that the researchers would have developed a reliable humor-recognition heuristic.

Re:But (2)

snadrus (930168) | about 9 months ago | (#44319259)

To be accurate, it would need
the same input inaccuracies (pi and pie verbally being the same), combined with
the order of learned experiences which influences weighting (pie before pi), combined with
a 4-year-old's limited capacity for context, combined with
need (eat) & desire (sweet foods).

We have a ways to go.

Re:But (3, Funny)

Laxori666 (748529) | about 9 months ago | (#44317865)

That's how we lost ol' Timmy. Asked him to give us a few digits of that good ol' pi and he just durn went and hung himself right in the ol' barn. We stopped teachin' the young'uns maths after that lil' incident.

Re:But (0)

Anonymous Coward | about 9 months ago | (#44317935)

I think you are confusing this with a different AI [wikipedia.org]

Misleading crap (5, Insightful)

Anonymous Coward | about 9 months ago | (#44317303)

We are nowhere near getting an AI that can navigate the world at the level of a 4 year old. All the program can do is simple tasks in vocabulary and such with no real understanding of those words. Nothing to see here.

Re:Misleading crap (4, Funny)

Doug Otto (2821601) | about 9 months ago | (#44317321)

Sounds like my ex-wife.

Re:Misleading crap (1, Insightful)

Anonymous Coward | about 9 months ago | (#44317377)

And you married her. How smart does that make you? About as smart as someone linking his FaceBook account to his Slashdot account.

Re:Misleading crap (1)

Russ1642 (1087959) | about 9 months ago | (#44317339)

A 4 year old is so far ahead of any current AI they shouldn't even think of making this kind of comparison. It would be a Nobel prize winning feat to produce AI that can operate at the level of a 6 month old, let alone something that can walk around, talk, learn, imagine, and play.

Re:Misleading crap (1)

Vanderhoth (1582661) | about 9 months ago | (#44317571)

I had a six-month-old a little more than a year ago. They don't do much at six months other than eat, sleep and grow; my daughter wasn't even interested in toys until around seven to eight months. All the interesting stuff starts taking place around one year, when they start learning things like crawling/walking and mimicking sounds. Around 18 months they start listening to simple instructions and saying actual words. Development progress not guaranteed and may vary; side effects may include poop on walls, pink eye, vomiting, cuts, violent tantrums and potential death to small furry animals.

My daughter is 21 months now and is forming sentences, can follow "complex" instructions, count to five, solve puzzles rated for four-year-olds, and knows simple shapes and the alphabet. I'm sure the daycare workers are exaggerating, but they've told us she's actually more mentally developed than the other kids in her class.

Using that as a gauge, simple A.I. can mimic behavior, recognize shapes and sounds, and form associations. So I'm pretty sure most A.I. up to ConceptNet 4 would be at about the level of an 18-to-24-month-old. ConceptNet 4, I think, is showing abilities that are probably consistent with at least a two-year-old and up.

Re:Misleading crap (1)

Russ1642 (1087959) | about 9 months ago | (#44317643)

The ability to look around, recognize faces, react to and locate sounds, control the behaviour of parents with crying, and take objects and put them in its mouth as well as a six-month-old does would be Nobel-prize-winning AI.

Re:Misleading crap (2)

Vanderhoth (1582661) | about 9 months ago | (#44317799)

With the proper sensors they can look around, recognize faces, and react to and locate sounds. I'm not sure giving a machine a mouth so it can shove random objects into it is really Nobel prize worthy...

Re:Misleading crap (1)

bjb_admin (1204494) | about 9 months ago | (#44317665)

Wait a few more years and she may be scripting in Lua in Roblox! My 10-year-old son has been doing this for the last year with no help from me!

Re:Misleading crap (1)

Vanderhoth (1582661) | about 9 months ago | (#44317707)

Yeah, I'm excited. I was programming at seven, when my Dad gave me his old Atari 130XE to play with. I can't wait to see what she'll do, since computers are so cheap they're practically throwaway items now.

Re:Misleading crap (2)

Vanderhoth (1582661) | about 9 months ago | (#44317371)

I think the point is that, like in a real child's development, this is a stepping stone; it's something A.I. has to go through in order to mature.

I never understood why people think an A.I. should learn any faster than a real child could. It's like people think that because it's a computer it automagically knows everything there ever is to know, but in reality A.I. still requires training and positive/negative reinforcement, just like real children do.

Re:Misleading crap (1)

ebno-10db (1459097) | about 9 months ago | (#44317503)

I agree, but after a program has learned for a while, you can make a copy of it. So if it takes 4 years to teach the program, it doesn't mean it takes 4 years for every copy of the program.
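The parent's point can be made concrete: once an agent's learned state is ordinary data, duplicating years of training is a copy operation. A toy sketch of the principle (this is not how ConceptNet itself works; the class and its fields are invented for illustration):

```python
import copy

# Toy "agent" whose entire learned state is a dict of association weights.
class ToyLearner:
    def __init__(self):
        self.weights = {}

    def learn(self, stimulus, response, strength=1.0):
        key = (stimulus, response)
        self.weights[key] = self.weights.get(key, 0.0) + strength

    def clone(self):
        # Four years of "training" copies over in milliseconds.
        twin = ToyLearner()
        twin.weights = copy.deepcopy(self.weights)
        return twin

teacher = ToyLearner()
teacher.learn("ice", "cold")
student = teacher.clone()
student.learn("fire", "hot")  # the copy diverges from here on
```

Note the deep copy: after cloning, each instance can accumulate its own experiences without affecting the other, which is exactly the branching the thread goes on to debate.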

Re:Misleading crap (1)

Vanderhoth (1582661) | about 9 months ago | (#44317667)

The issue I see with copying the A.I. is that you end up with a breeding-weakness problem. By developing each A.I. individually instead of copying it, you create a host of experiences for different versions of the A.I., as opposed to making sure every A.I. has the same experiences. That allows the A.I. to be more human, in that each individual A.I. can contribute something to a collective that the others haven't experienced. I know it sounds silly, but one may have more success at recognizing an abstract dog (think cartoon) because of a combination of different experiences, while another may have trouble with it. If you're just copying the A.I., then the one that has trouble recognizing the dog may end up copied over and over, and then all the A.I.s will have trouble with the task.

Re:Misleading crap (1)

ebno-10db (1459097) | about 9 months ago | (#44317809)

It makes sense that you would want M versions of the program, but once you have some that work well enough for something you want to do, you can still create copies of those versions.

Want more diversity? Get real 4 year olds. Don't forget snacks.

Re:Misleading crap (1)

narcc (412956) | about 9 months ago | (#44318317)

You say that like we have anything remotely like the kind of AI implied by the summary.

We're not even close. Hell, we don't even understand the problem at the simplest level. (Hint: The GP's 6-month-old comment was spot on.)

Re:Misleading crap (1)

Jeff Flanagan (2981883) | about 9 months ago | (#44317579)

I never understood why people think an A.I. should learn any faster than a real child could.

I think that's because we're used to computers processing data much more quickly than we do. This probably wouldn't be true of early AIs, but we judge things on our personal experience, and we have no experience with AIs, so we go to the closest related thing, the computer. If we could create a super-intelligent AI, it could scan the Internet to bring itself up to speed pretty quickly, and would probably be really into cat pictures.

Re:Misleading crap (5, Insightful)

ebno-10db (1459097) | about 9 months ago | (#44317439)

We are nowhere near getting an AI that can navigate the world at the level of a 4 year old. All the program can do is simple tasks in vocabulary and such with no real understanding of those words. Nothing to see here.

The headline is the usual attention grabbing junk, but the article itself does a decent job of explaining it:

Sloan said ConceptNet 4 did very well on a test of vocabulary and on a test of its ability to recognize similarities.

“But ConceptNet 4 did dramatically worse than average on comprehension—the ‘why’ questions,” he said.

One of the hardest problems in building an artificial intelligence, Sloan said, is devising a computer program that can make sound and prudent judgment based on a simple perception of the situation or facts–the dictionary definition of commonsense.

Commonsense has eluded AI engineers because it requires both a very large collection of facts and what Sloan calls implicit facts–things so obvious that we don’t know we know them. A computer may know the temperature at which water freezes, but we know that ice is cold.

“All of us know a huge number of things,” said Sloan. “As babies, we crawled around and yanked on things and learned that things fall. We yanked on other things and learned that dogs and cats don’t appreciate having their tails pulled. Life is a rich learning environment.”

IQ tests mean little enough for a human being; for AI they're little more than cute. Most 4-year-olds know if someone is mad at them (expression, tone of voice, etc.) and, from past experience, often know why. They're also clever enough to pretend they don't know why someone is mad at them. Most importantly (and practically), they know to start acting cute before somebody kills them. Let me know when an AI program can do that.

P.S. This is not to disparage the AI work, just to keep things in perspective.

Re:Misleading crap (2)

Laxori666 (748529) | about 9 months ago | (#44317831)

Ultimately the source of all our information comes from sensory input. That's how we know ice is cold, that things fall, some things hurt, others are pleasurable, etc. On top of that sensory data we construct an intelligent (symbolic) representation of the world, in tandem with a language (or several languages) which we share with others and can thus use to exchange ideas.

What AI researchers seem to be doing is skipping the sensory input part, because it's hard, and just trying to codify the intelligent representations directly. In light of the above, it's clear why common sense eludes AIs built on this principle.

Perhaps the approach that will ultimately succeed - and I don't see a convincing reason why we won't ultimately be able to build a sentient self-reflective machine which ends up being more intelligent than a human - is to mimic the human developmental approach from the start. Hook up a shitload of sensors to a massively parallel brain-structure-type thing and have it "learn" from there. We won't so much program it as direct its growth and evolution. This probably requires a ton more computing power than we have now, but seeing as how it on average doubles every 2 years, it will eventually catch up. The first artificial humans will likely be pretty unintelligent, but then they'll quickly surpass humans because we'll have bested evolution - we'll have figured out exactly what makes intelligence and sentience, and then we'll be able to turn that knob up to 11 and beyond in a relatively short period of time.

At that point we can only hope one of them doesn't go rogue and kill us all.

Re:Misleading crap (1)

Anonymous Coward | about 9 months ago | (#44318181)

I've often had thoughts along similar lines: we expect AI systems to be good after a few days or weeks of learning and are surprised they can't match what humans can do after years of learning. There are some projects that do long-term learning like NELL [wikipedia.org] which continually reads text from web pages and uses what it has learned already to better understand the text and therefore learn more from it. Also, maybe it's not quite sensory, but a lot of (most?) AI research these days is in machine learning [wikipedia.org], which is the general field of throwing data at an algorithm that somehow finds patterns in it and then is able to do some task on similar data. I don't think the state of the art in machine vision/speech is good enough for anyone to try to make a never-ending learner working with a robot's sensors or similar.

Note that human brains seem to do a fair amount of pre-processing on our sensory data, so what our brain has to learn from is a lot easier to deal with than what a naive camera or microphone picks up. I saw a cool talk on this a few months ago where they used brain reading technology to detect the signals corresponding to sounds after they had been processed by the brain a bit and were able to use that to eliminate most of the noise from a recording with a voice.

Re:Misleading crap (1)

phantomfive (622387) | about 9 months ago | (#44318049)

Most importantly (and practically), they know to start acting cute before somebody kills them.

Is this true? I agree with most of what you say, but I feel like when people are really angry at kids, the kids just get scared, not cute

Re:Misleading crap (1)

ebno-10db (1459097) | about 9 months ago | (#44319151)

I feel like when people are really angry at kids, the kids just get scared, not cute

The trick is to get cute just before the adults hit that stage. As to whether this approach works, the proof is that my daughter is still alive.

Re:Misleading crap (0)

Anonymous Coward | about 9 months ago | (#44319179)

Crying and trembling and being cute are all mechanisms to gain sympathy and show they aren't a threat. Kind of like when your dog rolls over, showing his vulnerable areas, and whimpers, only more complex.

Re:Misleading crap (1)

Trepidity (597) | about 9 months ago | (#44318749)

Most importantly (and practically), they know to start acting cute before somebody kills them. Let me know when an AI program can do that.

Brb, writing a grant proposal, "KAW-AI-I: Towards technologies for emotionally manipulative artificial agents".

Re:Misleading crap (0)

Anonymous Coward | about 9 months ago | (#44317649)

There is a lot of misleading crap on here. Anything to do with private space or 3D printing, for example.

Re:Misleading crap (0)

Anonymous Coward | about 9 months ago | (#44317833)

So it's still about 2 years ahead of your average Slashdotter then?

Re:Misleading crap (1)

TWiTfan (2887093) | about 9 months ago | (#44317987)

A four-year-old can project and read emotions with surprising sophistication (especially the project part), understand and communicate with spoken language, climb around and use motor skills for pretty complex tasks, and about a million other things that no one AI is even close to being able to do. Even the best AI's still struggle with such tasks *individually*, much less as a whole package. My daughter could understand plenty of things at 4 that would give Siri a fit, and Siri isn't even potty trained!

I think they have it backwards (4, Insightful)

shadowrat (1069614) | about 9 months ago | (#44317325)

They didn't assess how intelligent this AI is. They assessed the IQ test and found it to be a poor indication of intelligence.

Re:I think they have it backwards (0)

Anonymous Coward | about 9 months ago | (#44317495)

They assessed the IQ test and found it to be a poor indication of intelligence.

No, they didn't. Like or hate IQ, it's the most accepted way to give a number to intelligence. If you can't translate something into numbers you can't do any meaningful science with it. IQ may not be perfect, but whenever you measure something in the real world your results are imperfect.

Re:I think they have it backwards (3, Interesting)

bunratty (545641) | about 9 months ago | (#44317581)

It's the most accepted way to give a number to intelligence for a human being. In other words, the score on the IQ test is correlated highly with the intelligence of the person who took the test. There isn't necessarily such a correlation for computer programs. A relatively simple computer program might score highly on an IQ test designed for humans, but that certainly doesn't mean it's as smart as a human. I've taught this idea to AI classes -- it's a question near the beginning of the Russell & Norvig AI book.

Re:I think they have it backwards (-1)

Anonymous Coward | about 9 months ago | (#44320177)

If that is the level of "ideas" to be "taught... to AI classes", it's pretty clear that not enough is known about AI to even bother with a class.

At this point, you're just coming up with classes to justify having a CS major.

Re:I think they have it backwards (0)

Anonymous Coward | about 9 months ago | (#44317787)

They assessed the IQ test and found it to be a poor indication of intelligence.

No, they didn't. Like or hate IQ, it's the most accepted way to give a number to intelligence. If you can't translate something into numbers you can't do any meaningful science with it. IQ may not be perfect, but whenever you measure something in the real world your results are imperfect.

The problem isn't just that most IQ tests suck.
Many tests tend to focus on pattern recognition (spatial or numerical) and extrapolation in different ways.
The purely numerical versions are extra silly, since they're pretty much an exercise in guessing which extrapolation method the test writer wants you to use this time. If you know more math than the person who wrote the test, that will work against you.
It is also very common for the questions to require specific cultural or historical knowledge.

Pretty much all IQ tests I have seen ought to be relabeled "Pattern extrapolation skill test for people who passed elementary school in this country."

OOOh IQ thread. (0, Troll)

serviscope_minor (664417) | about 9 months ago | (#44317355)

Let me get started:

* IQ tests don't measure intelligence
* IQ tests only measure a certain *type* of intelligence.
* Your jealous because I have an IQ of -2147483648.
* I'm too smart for IQ tests.
* You're book smart but I'm street/code smart
* random troll at -1

Have I missed anything?

Re:OOOh IQ thread. (1)

slashmydots (2189826) | about 9 months ago | (#44317615)

You missed that IQ should be determined purely by video game abilities. Crack a door lock, solve a puzzle, do some math at an NPC vendor, shoot some people in the head in a logical fashion and tada, 150 IQ.

Re:OOOh IQ thread. (1)

mrbester (200927) | about 9 months ago | (#44317683)

Yes. Vocabulary does not indicate intelligence, merely memory and language exposure, and its continued use as an indicator shows how flawed IQ tests are.

Re:OOOh IQ thread. (0)

Anonymous Coward | about 9 months ago | (#44317823)

Every indicator of intelligence measures something other than general intelligence. It is the correlation between them which proves their relation to another quantity, general intelligence.

Don't shout something down if you can't be bothered to get even a basic understanding of how it works.

Re:OOOh IQ thread. (1)

phantomfive (622387) | about 9 months ago | (#44318085)

Yes. Vocabulary does not indicate intelligence, merely memory and language exposure and its continued use as an indicator shows how flawed IQ tests are.

Don't feel too bitter, MENSA rejected me, too.

Dupe (0)

Anonymous Coward | about 9 months ago | (#44317373)

How does this compare to the IQ of the average dupe-posting editor?

Does the IQ scale with Moore's law? (1)

Sla$hPot (1189603) | about 9 months ago | (#44317395)

In that case, when will it be old enough to buy liquor?
And when will it be smart enough to fire up Sky Net?

Re:Does the IQ scale with Moore's law? (0)

Anonymous Coward | about 9 months ago | (#44319273)

More importantly, if I install this thing and talk dirty to it, would I be a child molester?

Average IQ (0)

Anonymous Coward | about 9 months ago | (#44317401)

Wouldn't the average IQ of a 4 year old child be 100?

To put this in perspective (0)

Anonymous Coward | about 9 months ago | (#44317405)

The ED-209 unit from the Robocop movie was measured as a 5 year old.

It had a birthday recently (1)

sl4shd0rk (755837) | about 9 months ago | (#44317409)

The link seems to point to ConceptNet 5 now.
If they re-run their IQ test, I think they will gleefully find it is now as smart as a 5yr old.

Nice, meaningless score (1)

GeekWithAKnife (2717871) | about 9 months ago | (#44317513)


While I am excited about advancements in AI, it makes one wonder what the use of such scores is beyond marketing.

IQ is debunked; it's not a true measure of intelligence. If anything, it measures how much a person is willing to invest (time/effort) in scoring well on said test.

Compared to other children, the scores vary wildly, unlike any normal child's.

While it's still an achievement to have a sophisticated program worthy of the "AI" label, we are, unfortunately, nowhere near true AI.

Reason (1)

ArcadeMan (2766669) | about 9 months ago | (#44317647)

But unlike most children, the machine's scores were very uneven across different portions of the test.

That's not a bug, that's the beta version of GPP [wikia.com].

IQ tests only apply to humans (2, Insightful)

Anonymous Coward | about 9 months ago | (#44317723)

These tests don't tell us much about the power of an AI, and here is why. If you give a human a test with a million questions, then adding one more question is not going to tell you much more. You could probably remove some of the questions, too, without losing much information about how smart the person is. It turns out some of the questions are much more valuable than others when it comes to figuring out how smart someone is. If you put enough statistical work into that, you can condense those million questions into a quite short list that can be administered to a human in an hour or so, yet still tells you almost as much as the million-question test did. That's what an IQ test is.

The problem is, if you give that test to an AI, then the IQ number you get at the end won't tell you how well the AI would have done at a million/billion/trillion question test. You do get that information for a human because the test has been carefully constructed to be like that. For an AI, all you learn is how well the AI does at the questions in the test, which is much less interesting than the information you get from a human taking an IQ test.
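The condensation step the parent describes is classical item analysis from test construction: keep the questions whose answers correlate best with the overall score. A toy sketch with made-up response data (plain Python, no stats library):

```python
# Toy response matrix: rows = test-takers, columns = questions (1 = correct).
# Item analysis ranks questions by how well they track the total score.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 1],
]

def item_total_correlation(matrix, item):
    """Pearson correlation between one question's scores and the totals."""
    totals = [sum(row) for row in matrix]
    scores = [row[item] for row in matrix]
    n = len(matrix)
    mt, ms = sum(totals) / n, sum(scores) / n
    cov = sum((t - mt) * (s - ms) for t, s in zip(totals, scores))
    sd_t = sum((t - mt) ** 2 for t in totals) ** 0.5
    sd_s = sum((s - ms) ** 2 for s in scores) ** 0.5
    if sd_t == 0 or sd_s == 0:
        return 0.0
    return cov / (sd_t * sd_s)

# Rank questions by informativeness; a short test keeps the top few.
ranking = sorted(range(len(responses[0])),
                 key=lambda i: item_total_correlation(responses, i),
                 reverse=True)
```

Real test construction uses far more machinery (item response theory, norming populations), but this is the core statistical idea: a few well-chosen questions stand in for many, which is precisely the property that breaks down when the test-taker is not drawn from the population the test was normed on.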

Unfortunately (4, Funny)

puddingebola (2036796) | about 9 months ago | (#44317855)

Unfortunately the AI also lied that it had completed its arithmetic assignment so that it could go out to recess early. It is also suspected of taking an extra snack at snack time, and caused a disturbance during nap time.

Not intelligent (1)

Hypotensive (2836435) | about 9 months ago | (#44318089)

Humans are not programmed to be intelligent. Intelligence is just an aspect of how the brain (specifically the cortex) works.

Even very stupid children or other mammals can learn to do things like catch a ball, walk, remember someone's face or voice or that they had spaghetti for dinner. AI is concentrating on trying to duplicate the wrong types of behavior, starting from the wrong end. It doesn't tell us anything useful about humans or intelligence.

We'll never replicate the human experience... (0)

Anonymous Coward | about 9 months ago | (#44318095)

Until we start teaching our AI "consequence" and pain. Our entire existence is based on mitigating the effects of problems. As it stands, engineers have been focusing on data collection. Then they move on to the "why." But they have never tried to instill "fear" in their creations. People make their best efforts in response to fear, and until our machines understand that getting something wrong can be detrimental to them (which in turn means they have to understand the ultimate fear of non-existence), they will never achieve the level of intelligence we have.

This actually debunks IQ tests. (0)

Anonymous Coward | about 9 months ago | (#44318477)

It just shows how worthless an IQ test is for testing intelligence. There isn't even anybody home in this software.

Re:This actually debunks IQ tests. (0)

Anonymous Coward | about 9 months ago | (#44318493)

We ought to call these systems, "simulated intelligence," rather than, "artificial intelligence," because intelligence is an analog, not a digital, phenomenon.

Faking It (2)

Capt.Albatross (1301561) | about 9 months ago | (#44318523)

“ConceptNet 4 did dramatically worse than average on comprehension—the ‘why’ questions.” - Robert Sloan, lead author of the study.

This comment strengthens my feeling that current AI is making progress in faking many of the accidental attributes of intelligence, but has not discovered the essence.

The development of children's mental abilities seems to accelerate over time, as if there is positive feedback, but this does not seem to have emerged in AI yet, especially if we factor out Moore's law. On the contrary, any given exercise in developing AI through machine learning seems to hit a wall of diminishing returns at some point. Is anyone aware of a project that has not experienced this effect?

The actual study? How did they do it? (2)

paskie (539112) | about 9 months ago | (#44318843)

Does anyone know where to access technical information about the actual study, or how they conducted the IQ test? ConceptNet is just a database plus a library with some NLP parsing tools and accessors for the database (the concept hypergraph), but I wonder how they actually conducted the test, as that doesn't seem to be a trivial extension of the available tools...
