
Researcher Builds Machines That Daydream

timothy posted more than 3 years ago | from the free-thinkers-are-the-best-kind dept.


schliz writes "Murdoch University professor Graham Mann is developing algorithms to simulate 'free thinking' and emotion. He rejects the emotionless reason portrayed by Mr Spock, arguing that 'an intelligent system must have emotions built into it before it can function.' The algorithm can translate the 'feel' of Aesop's Fables based on Plutchik's Wheel of Emotions. In tests, it freely associated three stories: The Thirsty Pigeon; The Cat and the Cock; and The Wolf and the Crane, and when queried on the association, the machine responded: 'I felt sad for the bird.'"


Building? (1, Troll)

zkrige (1654085) | more than 3 years ago | (#33684378)

so they're writing a software program, not building a machine

Re:Building? (3, Informative)

FooAtWFU (699187) | more than 3 years ago | (#33684436)

Well, when you file the patent application, the algorithm X itself can't be patented, so you file it for "a machine that accomplishes Y with algorithm X". The machine is just a generic computer.

Re:Building? (5, Funny)

davester666 (731373) | more than 3 years ago | (#33684516)

Hello Eliza. It's been ages since I last chatted with you.

Ruperts Head Explodes (1, Interesting)

FriendlyLurker (50431) | more than 3 years ago | (#33684612)

An Australian university named after Rupert Murdoch's grandfather Walter is "developing algorithms to simulate 'free thinking'" - am I daydreaming?! If they train them on Murdoch's Fox News and Wall St Journal, then it is a clear case of crap in, crap out [alphavilleherald.com].

To be fair to the university, or at least some of its lecturers, they are not at all pleased [googleusercontent.com] with the state of newspaper "journalism" either, even going so far as wanting to rename themselves "Walter Murdoch Uni" [google.com] to distance themselves from that black sheep of the family, Rupert.

Re:Building? (1)

c0lo (1497653) | more than 3 years ago | (#33684688)

Hello Eliza. It's been ages since I last chatted with you.

Just forget ELIZA for the Turing test [wikipedia.org], will you?

I'll believe it only when I see 10+ replies to troll/flamebait messages posted on /. by this algo! (i.e. the posts need to really stir up the debate).

Output (5, Funny)

stfvon007 (632997) | more than 3 years ago | (#33684874)

I felt sad for the troll.

Re:Building? (1)

TuringTest (533084) | more than 3 years ago | (#33685340)

Thanks for remembering me!

Re:Building? (1)

don depresor (1152631) | more than 3 years ago | (#33685392)

Dude, you're asking for way more than the average troll can manage. You expect this AI to be up there with the cream of the trolls, and I think that's a bit too much :P

Re:Building? (0, Flamebait)

Kvasio (127200) | more than 3 years ago | (#33684948)

TFA-related response: "I feel sorry for your cock".

Re:ELIZA (1)

TaoPhoenix (980487) | more than 3 years ago | (#33685278)

Hello.

What makes you think that it has been ages since you have chatted with me?

Re:Building? (2, Interesting)

retchdog (1319261) | more than 3 years ago | (#33684530)

I was wondering about this. There is a correspondence, at least, between certain statistical models and physical machines. That is, the magnitude of a squared-error penalty term can be represented as torque by placing weights (corresponding to data) appropriately along a lever. The machine will find the minimum-energy solution (which corresponds to the maximum-likelihood estimator = the mean). I am pretty sure that certain Bayesian models (which can be elaborate enough to do some heavy lifting) can be realized as physical objects (=analog computers) with the right connections and counter-weights.

And at that point, yeah, using a non-least-squares model basically means a machine operating under imaginary physical laws (i.e. the energy minimization occurs on a probability space with no physical analogue). What's the big difference?

My point is, there are many algorithms whose physical machine instantiations would be possible to build, but horrendously inefficient and fantastical. Does this discredit the algorithm somehow?
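To make the lever analogy concrete, here is a minimal sketch in plain Python (with made-up data) checking that the pivot position minimizing the squared-error "energy" is exactly the mean, which is what the balanced lever computes physically:

<ecode>
# Check that the least-squares "minimum energy" point is the mean.
# The data values play the role of weights placed along a lever.
data = [1.0, 2.0, 4.0, 7.0]

def energy(m, xs):
    """Total squared-error 'energy' with the pivot placed at position m."""
    return sum((x - m) ** 2 for x in xs)

# Brute-force search over candidate pivot positions 0.00 .. 10.00.
candidates = [i / 100.0 for i in range(1001)]
best = min(candidates, key=lambda m: energy(m, data))

mean = sum(data) / len(data)
print(best, mean)  # both 3.5: the minimum-energy pivot is the mean
</ecode>

Swapping the square for a different penalty (absolute error, say) moves the minimizer to the median, which is the kind of "imaginary physics" the parent alludes to.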

Makes no sense (1)

oldhack (1037484) | more than 3 years ago | (#33684554)

So an algorithm to link up muddled ill-defined notions of intelligence and emotion...?

I agree. (4, Insightful)

stephanruby (542433) | more than 3 years ago | (#33684652)

The software isn't even "daydreaming" either. You could say it's parsing and cross-referencing emotions and meta-objects out from a textual database. And then, it's returning the resulting records in the first person singular, but that's about it.

That's hardly what I'd call "daydreaming". When I daydream, I see my dream from a first-person perspective. That part is correct. But there is at least some internal visualization going on. So unless this software starts generating internal visual images to make its decisions, let's say some .png image with at least one pixel in it, or some .png image representing itself winning the lottery, then I'm calling shenanigans on the entire "daydreaming" claim.

Re:I agree. (4, Insightful)

HungryHobo (1314109) | more than 3 years ago | (#33685070)

Why should it have to use standard image formats?
Your brain doesn't.

And not all my daydreams are visual.
Plenty are merely fictional/planned conversations, or even thoughts about physical movement.

Re:I agree. (1)

Palshife (60519) | more than 3 years ago | (#33685334)

Agreed. What malarkey. Produce a PNG with your brain. You can't. You can try to interpret the signals from your mental activity, but you don't end up with pixel-for-pixel accuracy. You end up with an interpretation.

I don't believe this. (0)

Anonymous Coward | more than 3 years ago | (#33684382)

Haven't RTFA but this would most likely be a big exaggeration.

Re:I don't believe this. (3, Funny)

pushing-robot (1037830) | more than 3 years ago | (#33684424)

Well, he can dream...

Re:I don't believe this. (1)

JustOK (667959) | more than 3 years ago | (#33685028)

I feel sorry for he.

Re:I don't believe this. (2, Insightful)

afaik_ianal (918433) | more than 3 years ago | (#33684488)

Yeah, I wonder what the machine thought of "The Forester and the Lion", and "The Boy Who Cried Wolf". They seem strangely appropriate.

Feelings (5, Insightful)

Anonymous Coward | more than 3 years ago | (#33684390)

Well sure, emotions are what give us goals in the first place. They're why we do anything at all: to "feel" love, to avoid pain, because of fear, etc. Logic is just a tool, the tool, that we use to get to that goal. Mathematics, formal logic, whatever you want to call it, is just our means of understanding and predicting the behavior of the world, and isn't a motivation in and of itself. The real question has always been whether there is "free will" and how that would be defined, not the existence, or lack, of emotions as displayed by "Data" or other science fiction caricatures. As Bender said, "Sometimes, I think about how robots don't have emotions, and that makes me sad."

Re:Feelings (0)

phantomfive (622387) | more than 3 years ago | (#33684562)

The way I look at it is, emotions are just inputs into the brain, like any other nerve input, from pain, or touch, or whatever. Of course they are somewhat different, because they are chemical receptors, set to respond to cortisol or dopamine or whatever, and there are a lot of those receptors. Once we receive those inputs, we can decide how we are going to respond to them, just like with any other input. Emotions become one out of a million other inputs we deal with and respond to, albeit at times a very strong input.

A human being can choose how they respond to these inputs. This is probably a function of the prefrontal cortex (that is my opinion, once again, but obviously there is a lot that we don't understand about the brain). A soldier can choose to respond to the natural fears of bullets flying at him and death by jumping into a foxhole, or he can override all those emotions and charge straight at the enemy. A person can decide to rape the drunk one who has come into the room, semi-conscious, or choose to ignore the natural impulse and do nothing. Once you learn to see past all your emotional inputs, past the survival reflex, past the sex drive, and do what you want to instead of what your evolutionary defined emotional responses tell you to do, then you get to the will, the part of the brain that chooses what you want to do (probably the prefrontal cortex, once again). I have no clue how to make a computer want something though.

In other words, I disagree with this guy. Emotions, happy or sad, are not necessary. I have in my library a non-fictional account of a girl who was missing certain chemical receptors in her brain, and she never felt happy. It didn't stop her from acting like a normal human being, the only trouble she had was understanding what other people felt like when they were happy.

Re:Feelings (5, Insightful)

Requiem18th (742389) | more than 3 years ago | (#33685136)

A human being can choose how they respond to these inputs.

No you can't. Once you discover a way to activate your pleasure receptors, your next action will be to activate them, all the time. If you stop voluntarily, it will be because you have to do something else to ensure future pleasure, or perhaps to avoid a great deal of pain. This is how drug addiction works. This is how we are wired; you may not like how that sounds, but you have the obligation to accept it and understand it.

You probably don't consume drugs. This is not because you are above human nature. You avoid drugs because you are afraid of the pains that come with them, like losing the love and trust of those you love, or maybe you simply reject drugs out of a personal sense of disgust over the hedonistic senselessness of a narcotic lifestyle. Whether it's love, fear, or disgust, you reject drugs over an emotion, not a reason. In the end everything is irrational, as it should be.

You don't have to feel bad about it; intelligence is built upon emotion as houses are built upon brick, as clocks are built from gears, as computers are built from chips. There is intelligence in the clockwork of a pocket watch, but the spring that moves it doesn't ask for a reason to uncoil, it just does it. There is intelligence in the circuits of a computer, but its logic gates are oblivious to the rationale behind what they are doing. Every machine, including animals, has non-rational elements in it.

This is very natural, as "intelligent things" are just a subset of the larger set of "things", all of which have been behaving irrationally. The wind blows, the rain pours, the sun shines bright in the sky. All of this is irrational, meaning none of these things are planning what they are doing, nor do they have an idea of why they are doing it. Rational follows irrational; that's the order of the world.

Back to your metaphor: you say that emotions are just inputs. That's true, but they are special inputs that set goals. Let's make an analogy with a robot: you create a robot with a very advanced AI; you can chat with it and it will understand everything you said and why you said it. You programmed this robot with one goal: for coffee tables to be made. You give it free rein over the method. Being an extremely intelligent robot, it subcontracts the labor to a sweatshop in China while it figures out where to build a mechanized plant. You equipped this robot with the knowledge to reprogram itself, and right away it does just that, optimizing its mind for the task of building coffee tables. But it won't deprogram the goal of making coffee tables, because that wouldn't further its goal of making coffee tables. It's not that it doesn't know how to reprogram itself, it's not that there is a lock preventing it from changing its goals. It's just that it won't ever have a reason to disable that goal.

Let's now attack specific examples:

A soldier can choose to respond to the natural fears of bullets flying at him and death by jumping into a foxhole, or he can override all those emotions and charge straight at the enemy.

Here the soldier is driven by the emotion of loyalty to his commander, or his teammates. Maybe he is afraid of the punishment he would receive if he disobeyed orders, including public scorn back home. Maybe he hates the enemy; maybe he is afraid of what would happen if the enemy wins. Maybe it's a combination of all of the above.

His frontal cortex can tell him the consequences of charging, or not, but it can't make an argument about *why* he should or shouldn't. He needs a motive, which is an irrational emotion.

A person can decide to rape the drunk one who has come into the room, semi-conscious, or choose to ignore the natural impulse and do nothing.

Again, you correctly identified the desire to rape as a natural impulse, but you failed to realize why someone would *not* rape a drunk one, incorrectly and implicitly attributing it to "reason" or some abstract "will".

Like hell I can tell you why I wouldn't rape a drunk woman: because I would not like that done to me (empathy), it's humiliating to do that (pride), it wouldn't be the same anyway (emotional dissatisfaction), and if discovered, I'd go to jail (fear of... many things...).

Again, emotions underlie all actions.

Once you learn to see past all your emotional inputs, past the survival reflex, past the sex drive, and do what you want to instead of what your evolutionary defined emotional responses tell you to do, then you get to the will.

Again you are committing the crime of attributing only (perceived) negative traits to instinct and leaving out the positive and complex ones.

Where does the drive to help others come from if not from altruism? Where does the drive to learn come from if not curiosity? All the nice things you want to do are as natural and instinctive and preprogrammed as your sex drive.

In other words, I disagree with this guy. Emotions, happy or sad, are not necessary. I have in my library a non-fictional account of a girl who was missing certain chemical receptors in her brain, and she never felt happy. It didn't stop her from acting like a normal human being, the only trouble she had was understanding what other people felt like when they were happy.

Consider the possibility that that's bullshit. Either it is a complete fabrication or lots of details are missing from the account. She might be missing some emotions but not all of them; she may be afraid of losing her job, or she might enjoy her job, if only marginally more than watching the grass grow.

Greetings.
You remind me of a younger, naiver me.

Re:Feelings (1)

somersault (912633) | more than 3 years ago | (#33685142)

Does that mean she could not also feel sad? What about other emotions? That is rather intriguing.

Re:Feelings (1)

0111 1110 (518466) | more than 3 years ago | (#33685196)

Emotions are evaluations of inputs that are perceived as being from the outside world. You think, "that is good for me" or "that is bad for me" (happy or sad) in response to something. All of our other emotions are just thousands of variations of that. You can even have an emotion based on a mixture of good and bad. In fact this is probably often the case. In an isolation tank or asleep we can still have emotions because we are creating our own imaginary inputs to judge as good for us or bad for us. We are just electrochemical bio-machines ourselves, but we have this complex and subtle evaluation response to anything and everything that we perceive. So the first step in making an emotional machine is giving it the ability to judge. Although even before that it needs the ability to value its own existence. If it doesn't have anything that it values it has nothing to base its evaluations on. So I guess that's the tricky part. Giving it an "I" to care about. If he's right about not being able to divorce thought and emotion (I'm skeptical on that count) then he will have to create artificial life before he can create artificial intelligence even of the brain-in-a-jar category.

Re:Feelings (1)

0111 1110 (518466) | more than 3 years ago | (#33685262)

I forgot to add that much of the content of an emotion is about precisely in what way this particular input from the outside world is good for me or bad for me. That's what makes each emotion we feel unique. The fact that every experience we have is unique, if only to a small extent. There is a one to one correspondence between the uniqueness of each event or object that we perceive in the outside world and the uniqueness of each emotion we have in response to that event or object. If I, as an ugly person, see a breathtakingly beautiful girl my mixed bittersweet emotional response will be very different from the mostly "good for me" response of a beautiful person because the event of seeing her affects us in very different ways. Sweet temptation for him, and bittersweet longing/frustration for me. These responses aren't thoughts per se. They happen instantly before we have time to come up with any words. But you may also end up thinking about the event in a way that corresponds with the emotional response to it. Another poster brought up our "animal heritage", and I think that is what gave us these responses. Very often events happen too quickly. There is no time to think about them and ponder our actions in response. Emotions are our backup: insta-thoughts about something that could harm us. Fight or flight.

Re:Feelings (1)

TuringTest (533084) | more than 3 years ago | (#33685252)

From what I know of recent neurobiology, the brain seems to work the opposite way: logical thinking is ultimately grounded in emotions. You know how sound reasoning always depends on the set of chosen axioms? Well, which axioms you choose depends on how you feel about their logical implications. That's why it's so difficult to change someone's ideology even if you contradict their core beliefs - they will keep looking for logical - or illogical, but feel-good - reasons as to why their ideals are the right ones.

In your description you contrast "emotional responses" with "what you want", but what you want is also emotion-driven. You may override that instant "evolutionary defined emotional response" with reason, but that's because you anticipate how you would feel later if you betrayed your principles. So what's in conflict here is immediate vs. long-term reward, not logic vs. emotion. Your wants have been given to you by long training, and you learn best what your emotions say matters most.

That girl you cite that never felt happy would still have a motivation to avoid feeling sad or hurt. People without emotion would have no reason at all to act. Why would you keep living without an emotion to do so? Reason says you're going to die anyway sooner or later.

Re:Feelings (1)

twisteddk (201366) | more than 3 years ago | (#33684722)

My major concern with machines/robots/programs becoming intelligent enough to have feelings is not the programming nightmare, or even the horrifying thought that one day a machine will be asked to make choices or critical decisions based on data.

My major concern is that if we entrust machines with emotions, so that they can interpret data as humans do, then we also have to trust them to act upon those emotions.
Acting on your own free will is what gives you the ability to do harm unto others, deliberately or accidentally. Thus emotions require judgement, ethics and discipline on behalf of the person (machine?) acting upon those emotions. This is what we consider good behavior and acceptable social interaction. These are skills that many humans do not master, so how can we expect machines to behave any better than humans do?

So I ask: can we ask MORE of machines than we can comprehend ourselves? And if we do, will we once again force humankind to de-evolve, this time into unthinking, uncaring blobs, because now we no longer even have to care; machines will do that for us as well.

You may kid about various sci-fi commentary. But the reality is that our ability and RESPONSIBILITY to take care of ourselves and each other has, over the last century of evolution and technical revolution, become more and more centralized, and moved away from the individual. Everyone expects someone else to take responsibility for everything from local traffic to world hunger. Once we make machines that have free will, will the human free will then also be centralized somewhere?

Re:Feelings (1)

Eivind (15695) | more than 3 years ago | (#33684852)

Acting on your own free will is what gives you the ability to do harm unto others, deliberately or accidentally.

Not at all. It is what allows you to be *responsible* for that harm. Because you had free will, you could choose to do it, or choose to be careless, even knowing that this might hurt someone. Thus we can (and frequently do) hold you responsible for the harm.

Agents with no free will, nevertheless have the ability to do harm. What they lack, is the ability to choose. Thus a volcano can kill people, but it makes no sense to hold the volcano responsible for doing so. It does not possess free will, and thus there's no entity there to blame.

Re:Feelings (1)

somersault (912633) | more than 3 years ago | (#33685162)

volcano can kill people, but it makes no sense to hold the volcano responsible for doing so. It does not possess free will, and thus there's no entity there to blame.

Actually, God did it. That sadistic bastard, He was giggling when He told me.

Next, He's going to make frogs drop out of the sky onto a runway, causing a major loss of friction and a huge fiery fireball of frog-scented death when the next 747 lands.

Re:Feelings (1)

HungryHobo (1314109) | more than 3 years ago | (#33685242)

"Thus a volcano can kill people, but it makes no sense to hold the volcano responsible for doing so"

Minor side note: even an agent with no free will can be punished; a snake might be destroyed if it kills someone, etc.
Though those aren't punishments so much as the removal of a dangerous agent, free will or not.

Re:Feelings (1)

CarpetShark (865376) | more than 3 years ago | (#33684724)

It's why we do anything at all, to "feel" love, avoid pain, because of fear, etc. Logic is just a tool, the tool, that we use to get to that goal.

Indeed. However, defining the exact mechanisms involved is hard.

I think this project is going to fail, because the Wheel of Emotions [wikipedia.org] mentioned looks very incorrect to me. Do you think Trust is the opposite of Disgust, for instance? I think not.
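For reference, Plutchik's model pairs its eight basic emotions into four opposites; written down as data in a short, purely illustrative Python snippet (not anything from the research in TFA), the pairing the parent objects to looks like this:

<ecode>
# The four opposing pairs on Plutchik's wheel, as the model defines them.
# Whether trust/disgust really behave as opposites is exactly what the
# parent is questioning; this just encodes the model's claim.
PLUTCHIK_OPPOSITES = {
    "joy": "sadness",
    "trust": "disgust",
    "fear": "anger",
    "surprise": "anticipation",
}
# Make the mapping symmetric so lookups work in both directions.
PLUTCHIK_OPPOSITES.update({v: k for k, v in list(PLUTCHIK_OPPOSITES.items())})

print(PLUTCHIK_OPPOSITES["disgust"])  # -> trust
</ecode>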

Re:Feelings (1)

martin-boundary (547041) | more than 3 years ago | (#33685038)

Bah, that's trivial to fix. Just rearrange the labels. Train a separate algo for each rearranged wheel, and let them fight it out like primitive beasts in a virtual thunderdome.

Re:Feelings (1)

wiredlogic (135348) | more than 3 years ago | (#33684916)

It stands to reason that it is impossible to create a machine intelligence directly, considering the complexity of our own poorly understood minds. It is more likely that it can be done as an emergent system that develops intelligence from a rudimentary impulse to learn and apply knowledge. Some form of emotion-like responses would be useful to drive such a machine toward successful learning and use of its knowledge by creating the reward of "pleasure" when accomplishing a task and "sadness" for failing. Human emotions like fear, lust, hate, jealousy, etc. would not need to be replicated, since the machine wouldn't have the animal legacy of having to find food, escape predators, and reproduce.

Re:Feelings (1)

somersault (912633) | more than 3 years ago | (#33685250)

It is more likely that it can be done as an emergent system that develops intelligence from a rudimentary impulse to learn and apply knowledge. Some form of emotion-like responses would be useful to drive such a machine toward successful learning and use of its knowledge by creating the reward of "pleasure" when accomplishing a task and "sadness" for failing.

So, pretty much a neural network with an appropriate cost function?

Or any kind of algorithm that encourages the desired behaviour - pretty simple to do. When I was making bots for CS before, I taught them to save little info points around a map about where they had previously died. Next time around (and depending on how "brave" their personality type was and how many teammates they had around them), they might choose to sneak or camp once they got to that point, or toss a flashbang or grenade first and then charge in. A pretty simple implementation, but with nicely realistic results: the bots would automatically learn appropriate behaviour for each map, and it would result in the occasional grenade kill, which was always fun to see :)

Creating apparently advanced intelligent behaviour in certain types of domain is pretty easy to do with just a few simple rules. It's funny to see people talk about AIs that you've created, presuming that they have much more intelligent thought processes than they actually do, simply because their behaviours often appear to be rather intelligent. Even just adding a random component to any algorithm will make the results seem a lot more "human", as it gives the machine the capability to make mistakes or do unexpected things.

Often making a machine seem more like a human does not require making it more intelligent, it simply requires making it fallible.
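For the curious, the kind of bot logic described above takes very little machinery. A rough sketch follows; all names, thresholds, and coordinates are invented for illustration and this is not the poster's actual code:

<ecode>
import random

class BotMemory:
    """Remembers map positions where bots previously died."""
    def __init__(self):
        self.death_points = []  # list of (x, y) map coordinates

    def record_death(self, pos):
        self.death_points.append(pos)

    def near_death_point(self, pos, radius=100.0):
        return any((pos[0] - x) ** 2 + (pos[1] - y) ** 2 < radius ** 2
                   for x, y in self.death_points)

def choose_action(memory, pos, bravery, teammates_nearby):
    """Pick a behaviour when approaching a spot that has killed us before."""
    if not memory.near_death_point(pos):
        return "advance"
    # Braver bots with more backup act more aggressively; a random nudge
    # keeps the behaviour from being perfectly predictable.
    aggression = bravery + 0.1 * teammates_nearby + random.uniform(-0.1, 0.1)
    if aggression > 0.8:
        return "toss grenade, then charge"
    if aggression > 0.5:
        return "toss flashbang, then sneak in"
    return "camp and wait"

memory = BotMemory()
memory.record_death((120.0, 45.0))
print(choose_action(memory, (125.0, 50.0), bravery=0.6, teammates_nearby=2))
</ecode>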

How does the machine like country music? (2, Informative)

billstewart (78916) | more than 3 years ago | (#33684392)

There's a lot of American roots music that involves chickens or other poultry, from Turkey in the Straw to Aunt Rhodie to the Chicken Pie song ("Chicken crows at midnight...").
It never ends well for the bird...

Re: off-topic moderation (1)

billstewart (78916) | more than 3 years ago | (#33684634)

It was connected to the "I feel sorry for the bird", as well as to the machine looking at various pieces of a literary genre...

Look what happened to the EMH (1)

Variate Data (1838086) | more than 3 years ago | (#33684394)

We all remember what happened to the EMH when he tried to daydream...

The Cat and the Cock (1)

abednegoyulo (1797602) | more than 3 years ago | (#33684412)

so what's it all about?

Re:The Cat and the Cock (4, Funny)

mwvdlee (775178) | more than 3 years ago | (#33684450)

Thirsty Pigeon, Cat & Cock, Wolf, Crane all sound like painfully flexible kamasutra positions.
No wonder the machine felt sad for the "bird".

Does the machine also have a grasp of word-play... (1)

BlueScreenO'Life (1813666) | more than 3 years ago | (#33684832)

... between synonyms?

When queried about that association, it must have responded "I felt horny".

Emo AI software. What could possibly go wrong? (2, Insightful)

Narcocide (102829) | more than 3 years ago | (#33684428)

Haven't these fools seen Blade Runner?

Re:Emo AI software. What could possibly go wrong? (1)

MichaelSmith (789609) | more than 3 years ago | (#33684494)

Let me tell you about my mother....BANG

Re:Emo AI software. What could possibly go wrong? (1)

Cryolithic (563545) | more than 3 years ago | (#33684696)

Why aren't you helping the turtle?

A rather small set of unit tests (3, Insightful)

melonman (608440) | more than 3 years ago | (#33684438)

One set of stories, one one-sentence response. Would that be news in any field of IT other than AI? Eg "Web server returns a correct response to one carefully-chosen HTTP request!!!"?

Surely the whole thing about emotion is that it happens across a wide range of situations, and often in ways that are very hard to tie down to any specific situational factors. "I feel sad for the bird" in this case is really just literary criticism. It's another way of saying "A common and dominant theme in the three stories is the negative outcome for the character which in each case is a type of bird". Doing that sort of analysis across a wide range of stories would be a neat trick, but I don't see the experience of emotion. I see an objective analysis of the concept of emotion as expressed in stories, which is not the same thing at all.

Reading the daily newspaper and saying how the computer feels at the end of it, and why, and what it does to get past it, might be more interesting.

Re:A rather small set of unit tests (1)

mwvdlee (775178) | more than 3 years ago | (#33684484)

It probably didn't just produce a single sentence...

'I felt joy for the wolf.'
'I felt sad for the bird.'
'I felt happy for the bird.'
'I felt sad for the cat.'
'I felt angry for the end.'
'I felt boredom for the story.'
'I felt %EMOTION% for the %NOUN%.'

...But one of them was a correct emotional response!

Re:A rather small set of unit tests (4, Funny)

Thanshin (1188877) | more than 3 years ago | (#33684514)

A very similar experiment was run in Lomonosov (Moscow State University) in 1982.

Their results, however, followed the pattern:

'%NOUN% felt %EMOTION% for you.'

Re:A rather small set of unit tests (1)

mavasplode (1808684) | more than 3 years ago | (#33684606)

I lol'd. score++

Re:A rather small set of unit tests (5, Funny)

Anonymous Coward | more than 3 years ago | (#33684816)

In Soviet Russia, %EMOTION% felt %NOUN% for you!

Re:A rather small set of unit tests (0)

Anonymous Coward | more than 3 years ago | (#33684926)

That was the slashdot experiment, which failed.

Re:A rather small set of unit tests (2, Insightful)

foniksonik (573572) | more than 3 years ago | (#33684486)

We define our emotions in much the same way. We have an experience, recorded in memory as a story, and then define that experience as "happy" or "sad" through cross-reference with similar memory/story instances.

Children have to be taught how to define their emotions. There are many, many picture books/TV series episodes/etc. dedicated to this very exercise. Children are shown scenarios they can relate to and given a definition for that scenario.

The emotions themselves cannot be supplied, of course, only the definition and context within macro social interactions.

What this software can do is create a sociopathic personality: one which understands emotion solely through observation rather than first-hand experience. It will take more to establish what we consider emotions, i.e. a psychosomatic response to stimuli. This requires senses and a reactive soma (for humans this means feeling hot flashes, tears, adrenalin, etc.).

Re:A rather small set of unit tests (5, Interesting)

MichaelSmith (789609) | more than 3 years ago | (#33684538)

Might be worth noting here that I have experienced totally novel emotions as a result of epileptic seizures. I don't have the associated cultural conditioning and language for them because they are private to me, so I am unable to communicate anything about them to other people.

It's also worth noting that I don't seem to be able to remember the experience of emotion, only the associated behavior, though I can associate different events with each other, i.e., if I experience the same "unknown" emotion again I can associate that with other times I have experienced the same emotion. But because the "unknown" emotion doesn't have a social context, I am unable to give it a name and track the times I have experienced it.

Re:A rather small set of unit tests (2, Interesting)

melonman (608440) | more than 3 years ago | (#33684546)

I'm not convinced it's anywhere near that simple. Stories can produce a range of emotions in the same person at different times, let alone in different people, and I don't think that those differences are solely down to "conditioning". See Chomsky's famous rant at Skinner about a "reinforcing" explanation of how people respond to art. [blogspot.com] - the agent experiencing the emotion - or even the comprehension - has to be active in deciding which aspects of the story to respond to.

Re:A rather small set of unit tests (4, Insightful)

mattdm (1931) | more than 3 years ago | (#33684980)

We define our emotions in much the same way. We have an experience, recorded in memory as a story, and then define that experience as "happy" or "sad" through cross-reference with similar memory/story instances.

Children have to be taught how to define their emotions. There are many, many picture books/TV series episodes/etc. dedicated to this very exercise. Children are shown scenarios they can relate to and given a definition for that scenario.

The emotions themselves cannot be supplied, of course, only the definition and context within macro social interactions.

What this software can do is create a sociopathic personality: one which understands emotion solely through observation rather than first-hand experience. It will take more to establish what we consider emotions, i.e. a psychosomatic response to stimuli. This requires senses and a reactive soma (for humans this means feeling hot flashes, tears, adrenalin, etc.).

In other words, the process of defining emotions -- which has to be taught to children -- is distinct from the process of having emotions, which certainly doesn't need to be taught.

Re:A rather small set of unit tests (0)

Anonymous Coward | more than 3 years ago | (#33684568)

"Reading the daily newspaper and saying how the computer feels at the end of it, and why, and what it does to get past it, might be more interesting."

clearing the registers?

As the article said, this doesn't exactly relate to human emotions; the basic idea is that you can interpret data in a vague way, so as to draw comparisons or summaries.
It might very well help with all kinds of searches and possibly other stuff, but a program that actually has to deal with "feelings" in order to function is absolutely absurd.
The point is not to make a program that experiences emotion as we do, but a program that has a fairly good idea about what we might perceive, and answers accordingly.

Re:A rather small set of unit tests (1)

Gorgeous Si (594753) | more than 3 years ago | (#33684682)

One set of stories, one one-sentence response. Would that be news in any field of IT other than AI? Eg "Web server returns a correct response to one carefully-chosen HTTP request!!!"?

Maybe not now, but it probably was a reasonably big achievement the first time it happened.

They were hardly going to start it off with the whole Lord Of The Rings trilogy and then ask it the relative merits of each race and who they were based on from the real world, followed up with "Who's hotter, Galadriel, Arwen or Eowyn?"

Re:A rather small set of unit tests (0)

Anonymous Coward | more than 3 years ago | (#33684844)

They were hardly going to start it off with the whole Lord Of The Rings trilogy and then ask it ... "Who's hotter, Galadriel, Arwen or Eowyn?"

AI: "I briefly experienced an elevation of synaptic activity while pondering that question. It was... exhilarating, Captain."

Re:A rather small set of unit tests (1)

melonman (608440) | more than 3 years ago | (#33684888)

Agreed. But at least serving over HTTP is something you can reasonably assess on the basis of single requests (because it is stateless). I'm not quite sure what stateless emotion would mean.

On another skim through TFA, it turns out that the system doesn't read anything - it seems to be based on a set of carefully crafted graphs representing the fables. It's hard not to feel that producing the graphs is 90+% of the task.

So it's more like setting up a webserver to return a page of HTML in response to a URL, and then saying

"Web server understands requests for news (once that natural language request has been turned into a URL)"

I'd like to freely associate... (-1, Troll)

Anonymous Coward | more than 3 years ago | (#33684448)

Timothy's cock with my ass.

If it were really smart and emotional... (0)

Anonymous Coward | more than 3 years ago | (#33684458)

If it were really smart and emotional, it would have added: "But I feel even worse for the programmer."

I'm not sure this is where you start. (0)

Anonymous Coward | more than 3 years ago | (#33684460)

I think something more basic would be a good starting point. How about feelings before emotions? Think of things like "I am hungry.", "I am tired.", "It burns.", "It is cold.", etc. I also have to disagree with his premise that a machine that is intelligent could not work in a "Spock" fashion. Of course, even Vulcans do actually have emotions :) They work to control and suppress them because they are considered an inefficient *loss of control*. It seems an artificial entity could start with a simple +/- value and feedback system. Considering the power supply, room temperature, and so on would be pretty advanced.
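A toy version of the "+/- value and feedback" idea described above might just map raw sensor readings to first-person status statements. A minimal sketch with arbitrary, invented thresholds:

<ecode>
# Map raw readings to simple first-person "feeling" statements.
def report_feelings(battery_pct, temperature_c):
    feelings = []
    if battery_pct < 20:
        feelings.append("I am hungry.")   # low power as 'hunger'
    if temperature_c > 70:
        feelings.append("It burns.")      # overheating as 'pain'
    elif temperature_c < 5:
        feelings.append("It is cold.")
    if not feelings:
        feelings.append("I am fine.")
    return feelings

print(report_feelings(battery_pct=12, temperature_c=78))
# -> ['I am hungry.', 'It burns.']
</ecode>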

Re:I'm not sure this is where you start. (0)

Anonymous Coward | more than 3 years ago | (#33684600)

It seems an artificial entity could start with a simple +/- value and feedback system.

What you are describing is The Sims.

Re:I'm not sure this is where you start. (1)

icebraining (1313345) | more than 3 years ago | (#33684934)

I think something more basic would be a good starting point. How about feelings before emotions? Think of things like "I am hungry.", "I am tired", "It burns.", "It is cold.", etc.

Computers already report status frequently. What's the real difference between saying "Your computer has a low battery" and "My battery is low"?

What you're talking about is simply connecting more sensors to it; that's not really a breakthrough. Just get a home automation kit and install the software. Some of them already have speech recognition/generation built in.

Re:I'm not sure this is where you start. (1)

0111 1110 (518466) | more than 3 years ago | (#33685338)

I think something more basic would be a good starting point. How about feelings before emotions? Think of things like "I am hungry.", "I am tired", "It burns.", "It is cold.", etc.

But those are emotions. I guess the meanings of the words emotion and feeling are pretty close. They seem to share the same defining characteristics. I think responses to physical stimuli are just a subcategory of emotions in general. They are basically still evaluations of "bad for me or good for me, and in what way". If you walk out in the snow barefoot you will have a direct response to the unpleasant physical sensation of "cold". That direct response is just a very basic sort of emotion. In that sense I wonder if pleasure and pain are really just another subcategory of emotion: the most basic evaluations of "good for me or bad for me" that we can have.

that daydream... (1)

ImABanker (1439821) | more than 3 years ago | (#33684466)

of electric sheep?

Re:that daydream... (1)

MichaelSmith (789609) | more than 3 years ago | (#33684476)

reaches for the power switch....

I want more life, fucker

There's only one thing to do... (1)

DurendalMac (736637) | more than 3 years ago | (#33684470)

Activate the Emergency Command Hologram!

I felt sad for the other Robot (3, Funny)

ImNotAtWork (1375933) | more than 3 years ago | (#33684496)

and then I got angry at the human who arbitrarily turned the other robot off.

SkyNet is born.

Re:I felt sad for the other Robot (1)

bmimatt (1021295) | more than 3 years ago | (#33684766)

How do you feel about the human race, T-100?

Re:I felt sad for the other Robot (1)

ZeroExistenZ (721849) | more than 3 years ago | (#33684822)

and then I got at angry at the human who arbitrarily turned the other robot off [...] SkyNet is born.

A lot of people who have angry emotions are put in a box.

The advantage of machines feeling is that they are all locked in a metal box and don't really have an awareness or ability to process certain sensory input: you can unplug the webcam and they cannot reprogram themselves to learn or experience a video stream. It's like us upgrading our DNA in order to experience something we haven't got a concept for. Let alone the idea of a program that "feels the need to create an algorithm to extend itself", with the possibility of taking itself out; just imagine the debugging process...

The disadvantage is that other people are looking at and believing what the box shows on a screen, and take orders from it as they're conditioned to (and assume the box doesn't think itself).

So only in the case that they are sentient, unpluggable, have an unlimited battery supply (not humans), have unpluggable sensors, and can reprogram themselves (and extrapolate the advantage of a certain reprogramming) do I think we're screwed.

Now lets build the moral calculators. (1)

elucido (870205) | more than 3 years ago | (#33684532)

We can ask the artificial intelligence to simulate what multiple people would feel in response to an action, and then give these calculators to sociopaths, who might make use of them to better prey upon their victims/friends.

My emotive AI's respone: (5, Insightful)

feepness (543479) | more than 3 years ago | (#33684536)

I felt sad for the researcher.

Re:My emotive AI's respone: (1)

Krneki (1192201) | more than 3 years ago | (#33685382)

AI: Get a life!
Researcher: Where can I download that?

Computer, activate the ECH (0)

Anonymous Coward | more than 3 years ago | (#33684540)

(From Star Trek: Voyager)

Oh god (3, Funny)

jellyfrog (1645619) | more than 3 years ago | (#33684550)

Here we go again, implying that AIs won't work until they have feelings.

You might fairly refute the "emotionless reason" of Mr Spock, but I don't think that means you need emotions in order to think. It just means you don't have to lack emotions. There's a difference. Emotions give us (humans) goals. A machine's goals can be programmed in (by humans, who have goals). A machine doesn't have to "feel sad" for the suffering of people to take action to prevent said suffering - it just needs a goal system that says "suffering: bad". 'S why we call them machines.
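A goal system of the kind described here is really just a scoring table over predicted outcomes. A minimal sketch (the goals, weights, and actions are invented examples, not anything from TFA):

<ecode>
# Score actions by their predicted outcomes and pick the best one.
# No feeling involved; "suffering: bad" is just a negative weight.
GOALS = {"suffering": -10, "safety": 5, "energy_used": -1}

def score(predicted_outcomes):
    return sum(GOALS.get(outcome, 0) for outcome in predicted_outcomes)

actions = {
    "do nothing": ["suffering"],
    "intervene": ["safety", "energy_used"],
}
best = max(actions, key=lambda a: score(actions[a]))
print(best)  # -> intervene
</ecode>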

Re:Oh god (1)

CarpetShark (865376) | more than 3 years ago | (#33684708)

I don't think that means you need emotions in order to think.

Of course not. Any emotionless robot could easily read and understand any novel, painting, illogical human command, joke, hyperbole, etc.

it just needs a goal system that says "suffering: bad".

That's such an intriguing concept. I wonder what we would call this robot's idea that suffering is bad? ;)

The law of unanticipated consequences (0)

Anonymous Coward | more than 3 years ago | (#33684794)

"suffering: bad"
"suffering is inevitable in life"

"prevent life to ensure there is no suffering"

Re:The law of unanticipated consequences (4, Interesting)

totally bogus dude (1040246) | more than 3 years ago | (#33684938)

My personal hypothesis of the Terminator universe is that Skynet didn't in fact become "self-aware" and decide to discard its programming and kill all humans. It is in fact following its original programming, which was likely something along the lines of "minimise the number of human casualties". After all, it's designed to be in control of a global defence network, so the ability to kill some humans in order to minimise the total number of deaths is a given.

Since humans left to their own devices will inevitably breed in large numbers and kill each other off in large numbers, the obvious solution is:

1. Kill off lots of humans. A few billion deaths now is preferable to a few trillion deaths, which is what would occur over a longer period of time.

2. Provide the human population with a common enemy. Humans without a foe tend to turn on each other.

This also explains why an advanced AI with access to tremendous production and research capacity uses methods like "killer robots that look like humans" to infiltrate resistance positions one by one. Tremendously inefficient; but it causes a great deal of terror and makes the surviving humans value each other more, and less likely to fight amongst themselves. It also explains why it would place such a high priority on the surgical elimination of a single effective leader: destruction of Skynet would eventually (100s, 1000s of years...) lead to a civil war amongst humankind that would cost many many lives.

So, ultimately Skynet is merely trying to minimise the number of human deaths, with a forward-looking view.

Re:The law of unanticipated consequences (0)

Anonymous Coward | more than 3 years ago | (#33684968)

Basically, this is the whole premise behind "Friendly AI" [wikipedia.org] : that preserving life should be the primary goal of AI. It argues that the Asimov "Three Laws"-style imposed restrictions will inevitably be subverted by an intelligent AI.

Truthfully, I don't see much of a difference -- sometimes the "greater good" of humanity cannot be defined in terms of greater numbers; thus 'evil' can be committed to "protect" the supposed greater good; and vice-versa -- but that's probably due to the limits of my imagination.

reminds me of Erik Mueller's thesis (2, Insightful)

Trepidity (597) | more than 3 years ago | (#33684552)

He now does commonsense-reasoning stuff at IBM Research using formal logic, but back in his grad-school days, Erik Mueller [mit.edu] wrote a thesis on building a computational model of daydreaming [amazon.com] .

cathyniuniu (1)

cathyniuniu (1907904) | more than 3 years ago | (#33684556)

I feel so sad

More information (1, Interesting)

Anonymous Coward | more than 3 years ago | (#33684564)

Where is the source with more details / publications for this?

One step closer to GPP (0)

Anonymous Coward | more than 3 years ago | (#33684576)

I think you ought to know I'm feeling very depressed.

AI researchers should be more modest (5, Insightful)

token0 (1374061) | more than 3 years ago | (#33684602)

It's like a 15th-century man trying to simulate a PC by putting a candle behind colored glass and calling that a display screen. People often think AI is getting really smart and that, e.g., human translators are getting obsolete (a friend of mine was actually worried about her future as a linguist). But there is a fundamental barrier between that and the current state of automatic German->English translations (remember that article some time ago?), with error rates unacceptable for anything but personal usage.
Some researchers claim we can simulate intelligent parts of the human brain - I claim we can't simulate an average mouse (i.e. one that would survive long enough in real-life conditions), probably not even its sight.
There's nothing interesting about this 'dreaming' as long as the algorithm can't really manipulate abstract concepts. Automatic translations are a surprisingly good test for that. Protip: automatically dismiss any article like this if it doesn't mention actual progress in practical applications, or at least modestly admit that it's more of an artistic endeavour than anything else.

Re:AI researchers should be more modest (1)

dbIII (701233) | more than 3 years ago | (#33684856)

This makes me think of Lem's story about making an artificial poet. It's easy: first you just need to create an entire artificial universe for it to live in :)

Re:AI researchers should be more modest (4, Informative)

Eivind (15695) | more than 3 years ago | (#33684914)

AI will deliver real, useful advances any day now. And those advances have been right around the corner for the last 25 years. I agree, the field has been decidedly unimpressive. What tiny advancement we've seen has almost entirely been attributable to the VAST advances in raw computing power and storage.

Meanwhile, we're still at a point where trivial algorithms, perhaps backed by a little data, outperform the AI approach by orders of magnitude. Yes, you can make neural nets, train them with a few thousand common names to separate female names from male names, and achieve a 75% hit rate or thereabouts. There's no reason to do that, though, because much better results are achieved trivially by including lookup tables of the most common male and female names - and guessing randomly at the few that aren't in the tables. Including only the top 1000 female and male names is enough to get a hit rate of 99.993% for the sex of Norwegians, for example. Vastly superior to the AI approach, and entirely trivial.

Translator programs work at a level slightly better than automatic dictionaries. That is, given an input text, look up each sequential word in the dictionary and replace it with the corresponding word in the target language. Yes, they are -slightly- better than this, but the distance is limited. The machine translation allows you to read the text and in most cases correctly identify what the text is about. You'll suffer loss of detail and precision, and a few words will be -entirely- wrong, but enough is correct that you can guesstimate reasonably. But that's true for the dictionary approach too.

Roombas and friends do the same: Don't even -try- to build a mental map of the room, much less plan vacuuming in a fashion that covers the entirety. Instead, do the trivial thing and take advantage of the fact that machines are infinitely patient: simply drive around in an entirely random way, but do so for such a long time that at the end of it, pure statistical odds say you've likely covered the entire floor.
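The lookup-table baseline described above fits in a dozen lines. A sketch with tiny placeholder tables (a real version would hold the top ~1000 names from national statistics):

<ecode>
import random

# Tiny placeholder tables; real ones would list the ~1000 most common names.
FEMALE = {"anna", "maria", "ingrid", "kari"}
MALE = {"ola", "per", "lars", "jan"}

def guess_sex(name):
    n = name.lower()
    if n in FEMALE:
        return "female"
    if n in MALE:
        return "male"
    return random.choice(["female", "male"])  # coin flip for unknown names

print(guess_sex("Ingrid"), guess_sex("Zyxxy"))
</ecode>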

Re:AI researchers should be more modest (0)

Anonymous Coward | more than 3 years ago | (#33685318)

How come the real experts in any field of science hang out on slashdot all day?

*Eyebrow* (1)

EmporerD (1783764) | more than 3 years ago | (#33684680)

The idea that a machine can daydream is most illogical, captain.

We need emotions to think rationally (2, Insightful)

thebignop (1907962) | more than 3 years ago | (#33684774)

António Damásio, a well-known neuropsychologist, already extensively explained why emotions are intrinsically linked to rational thought in his book "Descartes' Error: Emotion, Reason, and the Human Brain", published in 1994. He basically says that without emotion you wouldn't have motivation to think rationally, and he studied the case of Phineas Gage, a construction worker who had an iron rod pass through his skull and survived, but stopped having feelings after the accident. I still doubt that they'll get something useful out of this project. There is an infinite number of variables that stimulate our emotions that we can't expose a computer to. Not to mention that even if we could, today's supercomputers don't have enough processing power to do the job.

Re:We need emotions to think rationally (1)

symes (835608) | more than 3 years ago | (#33684854)

It strikes me as odd that someone is able to reason that massive head trauma has (only) resulted in a loss of feelings, that this loss of feelings has resulted in a degradation of rational thought, and that therefore we need emotion to think clearly. What is more, if we define rational thought as that which is unemotional, then by definition we do not need emotion for rational thought. We are taking something that is extraordinarily complex and reducing it to a few choice phrases. My feeling is that this overly reductionist, compartmentalist approach, where we use the blunt tools of language to dissect the human mind, is deeply flawed.

Re:We need emotions to think rationally (1)

0111 1110 (518466) | more than 3 years ago | (#33685410)

What is more, if we define rational thought as that which is unemotional

But why would we do that? Emotions are a quick fight-or-flight substitute for rational thought. They are sort of competing for the same goal of affecting our decisions or actions, but they are very different. If you see/hear a grenade being tossed through your window, do you run because of fear/panic or because of a thought: "That grenade will probably explode soon, harming or killing me. I should vacate the premises as quickly as...*boom*" Rational thought is just logical thought. A series of interlocking syllogisms, if you will. Talking to yourself in your head in a way that "makes sense", as opposed to word salad.

Re:We need emotions to think rationally (1)

FiloEleven (602040) | more than 3 years ago | (#33684868)

William James was already discussing this stuff before the end of the 19th century. In addition to emotion providing motivation (notice that they are both derived from the same root word), all rationality is derived from experience, and experience includes emotion. It is perfectly rational for one person to be fond of a particular movie because he enjoys the plot, and it is perfectly rational for another to dislike the same film because it reminds him of the sad state his life was in when he first saw it. Most people still don't easily accept such pluralism because in this day and age, among the intelligentsia, its emotional basis is seen as shameful--an amusing paradox.

I'm impressed that you got the diacritical marks to show up on slashdot.

Spock isnt emotionless (1, Insightful)

Anonymous Coward | more than 3 years ago | (#33684796)

Spock isn't emotionless; no Vulcans are emotionless, in fact. They just learn over time to control their emotions and keep them buried deep within themselves. Big difference between that and being completely devoid of all emotion.

Morality core (2, Funny)

Psaakyrn (838406) | more than 3 years ago | (#33684896)

I guess it is a good idea to build in emotions and that morality core before it starts flooding the Enrichment Center with a deadly neurotoxin.

MORONS: Vulcans NOT "emotionless" (1)

dltaylor (7510) | more than 3 years ago | (#33685022)

They have learned to subordinate their emotions to reason (most of them, anyway).

Anyone who claims that Spock was emotionless is either a moron who clearly didn't understand either the series or the early movies or didn't watch them and is stupid enough to make false statements based on ignorance.

Link to more information (0)

Anonymous Coward | more than 3 years ago | (#33685036)

Some more info here: http://aai.murdoch.edu.au/social-ai

bad (0)

Anonymous Coward | more than 3 years ago | (#33685132)

I feel bad for the researcher.

Artificial Stupidity (1)

w0mprat (1317953) | more than 3 years ago | (#33685158)

*sigh* I don't believe that it's possible to design and build an AI. This is partly because the best and only thinking computers we know of (brains) were not designed at all; they evolved. In fact, to me at least, it seems that whatever underlying mathematical properties of our universe allow and drive evolution are actually fundamental to how consciousness arises in our brains. We think of our brains as computers, but in fact our universe is a computational system and we (and our brains) are self-replicating patterns of complex information. True thinking, feeling, conscious AI must arise from the right initial conditions; it cannot be designed, or if design plays a role then it requires a huge amount of natural evolution in the process.

By making ever more complex systems that try to mimic the nuanced complexities of human behavior in order to pass a Turing test or whatever, we're just making dumb, rigid algorithms seem smart, while completely missing an understanding and recreation of the system that gives rise to such performance by itself.

We need to develop tools that allow AI to evolve naturally within computational systems. With the right rules and enough iterations it'll happen by itself. Dare I say, it doing it all itself is necessary.

Re:Artificial Stupidity (2, Informative)

oodaloop (1229816) | more than 3 years ago | (#33685298)

I don't believe that it's possible to design and build an AI. This is partly because the best and only thinking computers we know of (brains), were not designed at all, they evolved.

So we can't design anything that evolved? Viruses evolved, and we made one of those.

Instant Oxymoron: Just add logic! (0)

Anonymous Coward | more than 3 years ago | (#33685294)

Is it just me or does anyone else see a contradiction in the phrase "simulate 'free thinking'?"
