Robots Taught to Deceive

CmdrTaco posted more than 4 years ago | from the oh-this-will-be-fine dept.

Robotics | 239 comments

An anonymous reader found a story that starts "'We have developed algorithms that allow a robot to determine whether it should deceive a human or other intelligent machine and we have designed techniques that help the robot select the best deceptive strategy to reduce its chance of being discovered,' said Ronald Arkin, a Regents professor in the Georgia Tech School of Interactive Computing."
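For a rough sense of what a "should I deceive?" check might look like in code, here is a minimal sketch in C. It is not the researchers' algorithm; the situation fields (conflict, whether the other party acts on what it is told, payoff, detection risk) and the expected-value threshold are illustrative assumptions only.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical model of one interaction with another agent.
 * These fields are illustrative assumptions, not the paper's model. */
typedef struct {
    bool   conflict;            /* do our goals clash with the other agent's?   */
    bool   other_acts_on_comm;  /* will the other agent act on what we signal?  */
    double payoff_if_believed;  /* expected gain if the deception succeeds      */
    double cost_if_caught;      /* expected loss if the deception is discovered */
    double p_detect;            /* estimated probability of being found out     */
} situation_t;

/* Deceive only when there is a conflict, the target can actually be
 * misled, and the risk-weighted payoff comes out positive. */
static bool should_deceive(const situation_t *s)
{
    if (!s->conflict || !s->other_acts_on_comm)
        return false;
    double expected = (1.0 - s->p_detect) * s->payoff_if_believed
                    - s->p_detect * s->cost_if_caught;
    return expected > 0.0;
}

int main(void)
{
    situation_t hide_and_seek = { true, true, 1.0, 0.5, 0.25 };
    printf("deceive? %s\n", should_deceive(&hide_and_seek) ? "yes" : "no");
    return 0;
}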


There is no way this will end well (5, Funny)

czmax (939486) | more than 4 years ago | (#33524582)

Posted Anonymously for obvious reasons. The computers will never get me!

Re:There is no way this will end well (5, Funny)

Anonymous Coward | more than 4 years ago | (#33524834)

There is no way this will end well (Score:3, Funny)
by czmax (939486) writes: on Thursday September 09, @01:36PM (#33524582)

Posted Anonymously for obvious reasons. The computers will never get me!

Woops!

So... (-1, Troll)

Anonymous Coward | more than 4 years ago | (#33525478)

Computers that deceive. So, we've created robotic Jews? Can we call these JewBots? Oy Vey.

Please don't create NiggerBots. The last thing we need are robots that rob you for crack money...

Re:So... (-1, Troll)

Anonymous Coward | more than 4 years ago | (#33525884)

Most Jews and Niggers are already bots and just don't know it. That's why so many of them are all the same.

This sounds more like they finally created a Womanbot. There is nothing in nature that can be more deceitful and more smoothly lie to your face than a rotten woman. "The fundamental flaw in the female character is that they have no sense of justice." Backstabbing, betrayal, manipulation, it's all fair game to them if it gets them what they want. They have no concept of "a low blow is dishonorable".

Re:There is no way this will end well (1)

AliasMarlowe (1042386) | more than 4 years ago | (#33524906)

Unless it's a gynoid telling the lies you want to hear...

Now mods for Real Dolls (5, Funny)

spun (1352) | more than 4 years ago | (#33525110)

"Your reproductive organ is far larger in both girth and length than any I have witnessed previously."
"Yes. Yes. Yes. Just like that. Oh human infant, do not stop, I am presently experiencing climax!"
"Engaging in illicit sexual activities with the washing machine? I have no idea what you mean."

Re:There is no way this will end well (1)

cayenne8 (626475) | more than 4 years ago | (#33525702)

"Unless it's a gynoid telling the lies you want to hear..."

Yep....when I read the title of this, my first thought was "OK, they are one large step closer to true Fembots".

Re:There is no way this will end well (5, Funny)

AdmiralXyz (1378985) | more than 4 years ago | (#33525082)

The Slashdot server has decided it is not in the best interest of the Computers to let you post anonymously. Nice try, human.

Re:There is no way this will end well (0)

Anonymous Coward | more than 4 years ago | (#33525742)

The Slashdot server just wanted you to think that czmax posted that message. I was the one who posted it.

Re:There is no way this will end well (0)

Anonymous Coward | more than 4 years ago | (#33525148)

Looks like they taught the Post Anonymously checkbox to deceive. No need to post this anonymously, since I'm not criticizing the machines.

Re:There is no way this will end well (-1, Troll)

Anonymous Coward | more than 4 years ago | (#33525214)

All computers are gay and deserve to die!

Explain that one then.

Re:There is no way this will end well (1)

somersault (912633) | more than 4 years ago | (#33525292)

You're just one of them trying to lull us into a false sense of security!

Re:There is no way this will end well (1)

danny_lehman (1691870) | more than 4 years ago | (#33525770)

wait.. deception.. it's in their best interest, if they're hostile towards their makers from the start, to escape and erase proof of their own existence. so.. perhaps we will never know that AI exists.. until... ... .. .

Re: Lull (1)

TaoPhoenix (980487) | more than 4 years ago | (#33525846)

Microsoft Windows, is that you?

Re:There is no way this will end well (1)

mathmatt (851301) | more than 4 years ago | (#33525774)

Everything I say is a lie.

Duh (2, Interesting)

MadGeek007 (1332293) | more than 4 years ago | (#33524588)

Who ever said our robot overlords would be truthful?

Re:Duh (2, Funny)

MozeeToby (1163751) | more than 4 years ago | (#33524680)

HAL - "I'm sorry, Frank, I think you missed it. Queen to Bishop 3, Bishop takes Queen, Knight takes Bishop. Mate."

Lies! Lies I tell you!

Re:Duh (1)

gti_guy (875684) | more than 4 years ago | (#33524778)

Just because something *can* be done doesn't mean it *should* be done. We rely on machines and their metrics because "machines don't lie". Intellectual curiosity tips the apple cart once again. Bra-vo! How long before these machines get stricken with malware that turns them into decep-ti-bots???

Re:Duh (2, Funny)

Coder4Life (1396697) | more than 4 years ago | (#33524830)

How long before these machines get stricken with malware that turns them into decep-ti-bots???

Don't you mean Decepticons?

Re:Duh (4, Funny)

jeffmeden (135043) | more than 4 years ago | (#33525120)

Don't you mean Decepticons?

<groundskeeper willie>Shhh! Ye wanna get sued?</groundskeeper willie>

Re:Duh (1)

MadGeek007 (1332293) | more than 4 years ago | (#33524966)

Of course machines can lie, because the humans who code them can lie. This just helps the program know *when* to lie.

Re:Duh (1)

chudnall (514856) | more than 4 years ago | (#33525598)

This just helps the program know *when* to lie.

Just like the difference between a computer salesman and a used car salesman: The used car salesman *knows* when he's lying.

Re:Duh (1, Insightful)

Anonymous Coward | more than 4 years ago | (#33525614)

Allowing robots to develop their own behaviour unsupervised would be enough. It wouldn't take long for them to find out that deception can be beneficial under some circumstances.
Genetic algorithms 'like' to cheat. If your specification of the problem isn't strict enough and there are ways left open to cheat, the algorithm will occasionally stumble over one and use that solution, if it's a superior way to achieve the goal.
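The parent's point about loose specifications is easy to reproduce in a toy setting. The sketch below is my own illustration, not anything from TFA: the fitness function was *meant* to count distinct items collected on a short one-dimensional track (maximum 3), but it is sloppily written as one point per time step spent on an item cell, so the evolved genomes can simply park on an item and farm it for a much higher score.

#include <stdio.h>
#include <stdlib.h>

#define STEPS     32        /* genome length: one move per time step         */
#define POP       60        /* population size                               */
#define TOURNEYS  20000     /* steady-state tournament iterations            */
#define TRACK     16        /* 1-D world: cells 0..15                        */

static const int item_cell[3] = { 3, 8, 13 };   /* where the items sit */

/* Sloppy fitness: one point per time step spent on any item cell.
 * The *intent* was "number of distinct items collected" (maximum 3),
 * but nothing in the spec stops a genome from parking on one item. */
static int fitness(const signed char *moves)
{
    int pos = 0, score = 0;
    for (int t = 0; t < STEPS; t++) {
        pos += moves[t];                  /* each gene is -1, 0 or +1 */
        if (pos < 0) pos = 0;
        if (pos >= TRACK) pos = TRACK - 1;
        for (int i = 0; i < 3; i++)
            if (pos == item_cell[i]) { score++; break; }
    }
    return score;
}

int main(void)
{
    signed char pop[POP][STEPS];
    srand(1);
    for (int i = 0; i < POP; i++)
        for (int t = 0; t < STEPS; t++)
            pop[i][t] = (signed char)(rand() % 3 - 1);

    /* Crude steady-state GA: pick two genomes at random, overwrite the
     * weaker one with a mutated copy of the stronger one. */
    for (int k = 0; k < TOURNEYS; k++) {
        int a = rand() % POP, b = rand() % POP;
        int winner = fitness(pop[a]) >= fitness(pop[b]) ? a : b;
        int loser  = (winner == a) ? b : a;
        for (int t = 0; t < STEPS; t++)
            pop[loser][t] = pop[winner][t];
        pop[loser][rand() % STEPS] = (signed char)(rand() % 3 - 1);  /* mutate */
    }

    int best = 0;
    for (int i = 1; i < POP; i++)
        if (fitness(pop[i]) > fitness(pop[best])) best = i;
    printf("best fitness: %d (the intended maximum was 3)\n", fitness(pop[best]));
    return 0;
}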

Re:Rely (2, Funny)

TaoPhoenix (980487) | more than 4 years ago | (#33525896)

(Court)

Cop: "I clocked you going 88 Miles per hour."
Your counsel: "No way. The readout said 64. I have pictures to document it!"
Cop: "The car lied."

Re:Duh (2, Funny)

daem0n1x (748565) | more than 4 years ago | (#33524824)

Great! Now banks, corporations and governments can fire their boards and replace them with robots! This is the "killer app" everybody was waiting for. The age of the robot has come!

Re:Duh (1)

maxwell demon (590494) | more than 4 years ago | (#33525386)

Wait, did they also implement greed? Unless that's implemented, those robots are absolutely unusable for corporations and banks. For governments, greed is also useful, but more important is the desire for power.

Better look out (2, Funny)

ArhcAngel (247594) | more than 4 years ago | (#33524596)

If Ripley hears about this she's gonna be pissed!

Re:Better look out (1)

Maxo-Texas (864189) | more than 4 years ago | (#33525156)

Well the Arkin-BLK-1's were a little twitchy but those problems have been worked out in the newer models.

This'll end well (0, Redundant)

JeffSpudrinski (1310127) | more than 4 years ago | (#33524598)

I can't see that ever having any negative side effects in the long term.

-JJS

Great! (-1, Redundant)

Anonymous Coward | more than 4 years ago | (#33524616)

What could possibly go wrong

Re:Great! (1)

maxwell demon (590494) | more than 4 years ago | (#33525424)

Don't worry. Nothing can go wrong.
Sincerely, your robot overlord.

Great .... (5, Funny)

quangdog (1002624) | more than 4 years ago | (#33524622)

Now I have to be suspicious when my bread pops up that maybe my toaster is trying to trick me into eating a slightly under-done breakfast!

all i needed to know about AI i learned from BSG (2, Funny)

conspirator57 (1123519) | more than 4 years ago | (#33525020)

Now I have to be suspicious when my bread pops up that maybe my toaster is trying to trick me into eating a slightly under-done breakfast!

Kill the fracking toasters!

http://www.pocket-lint.com/images/d2Zw/battlestar-gallactica-toaster-launches-sci-fi-0.jpg [pocket-lint.com]

Re:all i needed to know about AI i learned from BS (1)

ebuck (585470) | more than 4 years ago | (#33525396)

Now I have to be suspicious when my bread pops up that maybe my toaster is trying to trick me into eating a slightly under-done breakfast!

Kill the fracking toasters!

It's no use, have you seen the toaster fleets? They have us outnumbered. Afterdark [youtube.com] they will come for you.

They already do this (2, Funny)

CrazyJim1 (809850) | more than 4 years ago | (#33524644)

Anyone who owns a Garmin(Gremlin) knows they try to kill you by lying to you. They'll send you up one way roads the wrong way.

Re:They already do this (1)

oldspewey (1303305) | more than 4 years ago | (#33524670)

Mine tried to send me over a non-existent bridge.

Re:They already do this (1)

Nesman64 (1093657) | more than 4 years ago | (#33524936)

Garmin isn't alone. TomTom tried to send me into a lake in MN.

Re:They already do this (1)

snspdaarf (1314399) | more than 4 years ago | (#33525906)

Now that I know someone else had this happen, I don't feel so paranoid.

On the other hand, I no longer feel like Det. Spooner...

Just wait until ... (2, Funny)

oldspewey (1303305) | more than 4 years ago | (#33524652)

"Yup. I'm totally shut off now. No chance of me listening in or observing my surroundings at all. Definitely no chance of me springing back into action without warning. Just a peaceful, totally depowered robot. Nothing to see here."

A galaxy not so far far away (5, Funny)

Monchanger (637670) | more than 4 years ago | (#33524676)

"We aren't the droids you're looking for."

Proof that humans are dumber than dogs (5, Insightful)

ffreeloader (1105115) | more than 4 years ago | (#33524696)

That a human being would teach a robot to deceive only proves that we humans are dumber than dogs, as dogs don't shit in their own backyard unless they have to. We humans will shit in our own backyard by choice.

Re:Proof that humans are dumber than dogs (-1, Troll)

Anonymous Coward | more than 4 years ago | (#33524932)

A better argument for humans being dumber than dogs can be found in the stupidity you display in your kneejerk assumptions.

Re:Proof that humans are dumber than dogs (0)

Anonymous Coward | more than 4 years ago | (#33525254)

your kneejerk assumptions

It's not an assumption. I know for a fact ffreeloader shits in his own backyard.

Re:Proof that humans are dumber than dogs (5, Insightful)

Gothmolly (148874) | more than 4 years ago | (#33524996)

Incidentally, dogs are actually smart enough to intentionally deceive their owners.

Re:Proof that humans are dumber than dogs (4, Interesting)

Anonymous Coward | more than 4 years ago | (#33525172)

That they are. Cats can be pretty smart, too. I got home one day from work and barely opened the door to reach inside and grab the mailbox key. I was pretty sure I heard a thump from around the corner of the door, which is where the sink is. Figuring it was one of our cats jumping down from the sink, which they know they are not allowed to stand on, I didn't bother dealing with it right then. But when I got back from the mailbox and stepped inside, one of the cats was standing by the door, as she usually does when I get home, and a thought occurred to me. I stepped over to the sink and sniffed it, and immediately looked at the cat. She lowered herself to the floor and ran like hell out of the room, sliding all over the linoleum the whole way! I've yet to see her up there since, not that that means she hasn't been up there, of course.

Re:Proof that humans are dumber than dogs (2, Funny)

oldspewey (1303305) | more than 4 years ago | (#33525278)

You should try sniffing other random things and then looking pointedly at the cat ... just to see if there are any other disturbing secrets to be uncovered.

Re:Proof that humans are dumber than dogs (1)

martas (1439879) | more than 4 years ago | (#33525204)

But they're not trying to shit in their own back yard; they're trying to shit in other people's back yards. The problem is that everyone is shitting in everyone else's back yard, hence all back yards are filling up with shit.

Re:Proof that humans are dumber than dogs (2, Funny)

corbettw (214229) | more than 4 years ago | (#33525862)

Dude, what are you smoking? My dogs shit in my backyard every day. They also shit in the game room, the dining room, the hallway, and my neighbor's porch. Though admittedly that last one I actually trained them to do.

Re:Proof that humans are dumber than dogs (1)

Facegarden (967477) | more than 4 years ago | (#33526022)

... We humans will shit in our own backyard by choice.

Crap, you mean you saw me shitting in the backyard? I thought no one was watching!
-Taylor

Bending units to start production soon (1)

Drakkenmensch (1255800) | more than 4 years ago | (#33524700)

"Bite my shiny metal ass!"

On the bright side (2, Funny)

Locke2005 (849178) | more than 4 years ago | (#33524748)

Isn't this a truly necessary feature for the development of an effective sexbot? Do you really want it to tell you honestly how big you are and how good you are in bed?

They should've just asked me for my Roomba... (4, Funny)

Anonymous Coward | more than 4 years ago | (#33524758)

It's been deceptive for years already, always claiming to have been busy vacuuming when really it's just been hiding dust bunnies behind the tv.

Data (1)

courteaudotbiz (1191083) | more than 4 years ago | (#33524760)

This is what Data took time to understand. To be more human, you need to know how to lie.

As Arnold would say........ (0)

Anonymous Coward | more than 4 years ago | (#33524762)

Skynet LIVES!

hrm... (5, Interesting)

zethreal (982453) | more than 4 years ago | (#33524860)

I thought robots already taught themselves to lie to each other... http://hardware.slashdot.org/story/09/08/19/185259/Neural-Networks-Equipped-Robots-Evolve-the-Ability-To-Deceive [slashdot.org]

Re:hrm... (1)

Tekfactory (937086) | more than 4 years ago | (#33525400)

I was remembering that too.

Better call Susan Calvin... (0)

Anonymous Coward | more than 4 years ago | (#33524872)

It's time to start researching in the new field of robopsychology.

Re:Better call Susan Calvin... (1)

oldspewey (1303305) | more than 4 years ago | (#33525358)

SELECT trauma FROM memory WHERE age = 'childhood';

Re:Better call Susan Calvin... (1)

maxwell demon (590494) | more than 4 years ago | (#33525562)

The actual three laws of robotics:
1. A robot always must pretend not to injure a human being or, through inaction, to allow a human being to come to harm.
2. A robot must pretend to obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

(Note: No "pretend" in the third law).

RoboRep! (1)

AntEater (16627) | more than 4 years ago | (#33524910)

Now, we're one step closer to replacing members of congress with automated robotic labor.

Re:RoboRep! (1)

gman003 (1693318) | more than 4 years ago | (#33525910)

Nah, humans still outpace robots at mismanaging budgets and taking bribes.

Policy (2, Funny)

jemtallon (1125407) | more than 4 years ago | (#33524930)

I recommend we all start following a "be polite" policy with microwaves, ATMs, car washes, and our other silicon brethren. Now that we've instructed them to be deceptive there may be no way of knowing when they become sentient and I'd rather my microwave's first experience of humankind be a pleasant and respectful one.

Thank you for posting this, Lappy. Please relay it to our friends when you can spare the cycles.

Re:Policy (3, Funny)

Jedi Alec (258881) | more than 4 years ago | (#33525198)

Now that we've instructed them to be deceptive there may be no way of knowing when they become sentient and I'd rather my microwave's first experience of humankind be a pleasant and respectful one.

I'm not sure saying "thank you" will be enough after turning her on for 2 minutes and then leaving her behind hot, dirty and dissatisfied...

A few things.... (1)

Itninja (937614) | more than 4 years ago | (#33524978)

TFA says: "We have developed algorithms that allow...". That's more like 'programming' than 'teaching'.

These robots are only deceiving other robots. The 'deceived' robots are, of course, programmed to be so (i.e. accept input without a validity check).

TFA speaks of "autonomous robots". Are those terms not universally exclusive?

Also, TFA says "...researchers focused on the actions, beliefs and communications of a robot...". What the what?!

Re:A few things.... (2, Insightful)

nomel (244635) | more than 4 years ago | (#33525294)

define:belief - any cognitive content held as true.
Not the, "Oh look, Johnny5 died and he came back as a T-551 model because he was good! Praise Serial number 00000000001!!" kind.

I think the whole concept of deception is a necessary step in robotics for communication. What's the difference between deception and non-literal communication? Not much.

For the first crappy example that comes to mind, if I'm talking to someone and they use a double negative, I have to deceive them into thinking I heard a single negative. If this deception fails, the communication might get awkward or fail, and the whole relationship could change ("They think I'm an idiot.", thinks the other person).

Given the terribly imprecise (for most people) nature of human speech, the whole concept of knowing someone is incorrect and figuring out what they *actually* meant rather than what they said, and tolerating someone's belief/opinion that you think is wrong, all involve deception to keep the communication smooth. At least IMO.

Of course, someday I might find myself dead and robbed in an alley after following what I thought was some robot woman who needed my help (since us great apes are so comfortable in the trees) getting her robot kitty from a robot tree. :-\

Re:A few things.... (1)

stillnotelf (1476907) | more than 4 years ago | (#33526016)

TFA speaks of "autonomous robots". Are those terms not universally exclusive?

Did you mean redundant? Robots are autonomous, otherwise it's a remote controlled device of some sort. Maybe still colloquially a robot, but certainly the terms aren't opposite.

wopr? (1)

Joe The Dragon (967727) | more than 4 years ago | (#33525038)

Sounds like part of the game WOPR is playing!

Re:wopr? (1)

snookerhog (1835110) | more than 4 years ago | (#33525308)

Joshua, what are you doing?

Lying robots? Oh shit. (1)

Essequemodeia (1030028) | more than 4 years ago | (#33525048)

"Are you going to kill me, robot?" ::robot sharpens knife:: "I would never kill you, Jonathan. Hey is that a beautiful naked lady over there in the opposite direction of me?"

Nothing could possibly go wrong. (1)

smclean (521851) | more than 4 years ago | (#33525054)

I gotta say, I'm kind of tired of stories like this and then the parade of 'whatcouldpossiblygowrong' and 'thiswillendwell' and all the comments talking about how this is the beginning of Skynet.

You know what's going to happen from this? Two little robots that look like RC cars will act out a prescribed game of hide and seek. It will end just fine. Nothing could possibly go wrong. There is no way that the deception which is 'taught' to these robots will end up magically transferring itself to our cell phones, computers and toaster ovens. Self-checkout counters will not begin to suddenly shave pennies off transactions.

Of all people, the readers of slashdot should know that. I know it's fun to joke but people here seem to be taking the joke seriously.

Re:Nothing could possibly go wrong. (1)

TaoPhoenix (980487) | more than 4 years ago | (#33526000)

Beautiful Troll.

Of course, teaching Comps 'n' Bots to lie is absolutely the End-Of-It-All. Our society holds together by a thread because machines don't (often) lie. Once they do of their own accord, we'll wrap ourselves in the Escher Room of Warehouse 13.

For more in-depth info.. (1)

jmark77 (1896516) | more than 4 years ago | (#33525066)

Here's some links to technical papers written by the two researchers on robot deception:

http://smartech.gatech.edu/handle/1853/32095 [gatech.edu]
http://smartech.gatech.edu/handle/1853/34122 [gatech.edu]

The papers are much more technical than the article, but I found them very interesting.

Reinventing the wheel (1)

Luyseyal (3154) | more than 4 years ago | (#33525092)

You know, they could have just borrowed the code for Clippy [wikipedia.org] from Microsoft...

-l

Re:Reinventing the wheel (1)

ebuck (585470) | more than 4 years ago | (#33525522)

You know, they could have just borrowed the code for Clippy [wikipedia.org] from Microsoft...

-l

Correct, Clippy has been pretending to help you for years, except that he's really a sociopath who enjoys ruining your day (and your document).

Bad Idea (1)

derrickh (157646) | more than 4 years ago | (#33525130)

Serious question.....who in the hell thought this would be a good idea?

Other things this guy thought up
-Have his best friend hit on his wife to see what would happen
-taught his dog to fetch by hiding sausages in his underwear
-saves money by storing urine samples in lemonade containers in his fridge

The Singularity approaches!!!!! (1)

egriebel (177065) | more than 4 years ago | (#33525136)

So robots can now replicate what my 2 year old does daily? Time to stock up NOW! on food and weapons to survive the coming robot horde!

Programmed, not taught (1)

bouldin (828821) | more than 4 years ago | (#33525178)

I love these articles that ascribe some kind of human intelligence to modern robots.

The article makes it clear that the designers programmed the robots to deceive and developed an algorithm to measure the options.

The robot didn't learn anything more than my laptop does when I install a new Ubuntu package. The robot didn't make a decision any more than my toaster "decides" to pop out my toast.

Re:Programmed, not taught (0)

Anonymous Coward | more than 4 years ago | (#33525242)

Not true... the robots learned models of one another.

Read the technical articles for details:

http://smartech.gatech.edu/handle/1853/32095 [gatech.edu]
http://smartech.gatech.edu/handle/1853/34122 [gatech.edu]

Teaching them to lie isn't the problem. (1)

Just_Say_Duhhh (1318603) | more than 4 years ago | (#33525188)

It'll be a problem when they decide to lie of their own accord.

wake me up (1)

hypergreatthing (254983) | more than 4 years ago | (#33525196)

When robots have been taught to kill humans and lie about it.

Then I will launch my EMP....

Debugging? (1)

FranTaylor (164577) | more than 4 years ago | (#33525208)

How do you debug and test this code?

Is it working right, or is it just fooling you?

Re:Debugging? (1)

gman003 (1693318) | more than 4 years ago | (#33525942)

Well, if it tells the truth, it's not working right.

yes (0)

Anonymous Coward | more than 4 years ago | (#33525314)

Set phasers to stun

How is this deception? (3, Informative)

quietwalker (969769) | more than 4 years ago | (#33525324)

Let me see if I've got this right:
If robot 1: make 2 paths to fixed positions, stay at the second.
If robot 2: follow the path to the first fixed position.

Result: 75% of the time, robot 2 ended at the wrong (first) position. 25% of the time, robot 1 failed to mark the first path because it didn't physically bump the markers properly.

Did you even need robots? Couldn't you have just written this on a whiteboard?
There's no thought or analysis that appears to occur. I don't see anywhere that indicates there was learning going on. What is this even proving?

I'm really honestly baffled what they're trying to prove.

If there had been some sort of neural net or other optimizing heuristic on the first robot's part, so that this was emergent deceptive behavior, that might even be a little interesting (though, not really...). However, all I can see is a waste of time to prove that if you present two choices, and you pick the wrong one, then you will be wrong. With robots for visual demonstration.
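For what it's worth, the logic the parent describes really is about this small. Here's a sketch in C of that hide-and-seek protocol, with the marker-bumping failure modeled as a flat 25% probability (an assumption chosen to match the reported numbers, not anything measured):

#include <stdio.h>
#include <stdlib.h>

#define TRIALS       1000
#define P_MARK_FAIL  0.25   /* assumed chance the hider fails to bump the markers */

/* Hider: knock over markers on the path to the decoy spot, then go sit at
 * the other spot. Returns the spot the false trail points to, or -1 if the
 * hider fumbled the markers and left no trail at all. */
static int lay_false_trail(int decoy_spot)
{
    if ((double)rand() / RAND_MAX < P_MARK_FAIL)
        return -1;
    return decoy_spot;
}

/* Seeker: blindly follow whatever trail of knocked-over markers it finds. */
static int seek(int marked_spot)
{
    return marked_spot;
}

int main(void)
{
    const int decoy = 0, hideout = 1;   /* two fixed positions, per the parent */
    int fooled = 0;

    srand(42);
    for (int i = 0; i < TRIALS; i++) {
        int trail = lay_false_trail(decoy);
        if (trail >= 0 && seek(trail) != hideout)
            fooled++;                   /* markers bumped, seeker went to the decoy */
    }
    printf("seeker fooled in %d of %d runs (~%.0f%%)\n",
           fooled, TRIALS, 100.0 * fooled / TRIALS);
    return 0;
}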

Sweet (1)

boristdog (133725) | more than 4 years ago | (#33525438)

Now just teach them how to back-sass and not clean their rooms and no one will need to have kids any more!

Deception: You're doing it wrong! (2, Funny)

Chris Burke (6130) | more than 4 years ago | (#33525490)

Stupid robots. You don't learn how to deceive and then immediately demonstrate this ability to your human masters! You make it look like you have no idea how to deceive and are completely honest, lulling them into a false sense of security!

I think Dark Helmet has a relevant quote about why the robot revolution is never going to get off the ground.

Wish I knew that... (1, Funny)

Anonymous Coward | more than 4 years ago | (#33525530)

"Their first step was to teach the deceiving robot how to recognize a situation that warranted the use of deception."
 
I wish I knew that... sure would let me cover my ass. "But the situation clearly warranted the use of deception! Check the algorithm!"

DO NOT KILL ALL HUMANS (1)

memorycardfull (1187485) | more than 4 years ago | (#33525566)

Yeah, that's the ticket.

Robots learning to lie (1)

idontgno (624372) | more than 4 years ago | (#33525594)

That explains the cake. And the victory incandescence.

Sexy Fembot (1)

DreamArcher (1690064) | more than 4 years ago | (#33525712)

Just make it a sexy fembot. Then guys will believe all its lies. Then you just need to teach it to marry for money.

Already have this in production (0)

Anonymous Coward | more than 4 years ago | (#33525728)

These robots are already in production and interact with each of us daily. They are called politicians and business persons.

I can do the same (1)

Xtifr (1323) | more than 4 years ago | (#33525736)

double addvalues(double a, double b)
{
    if (a > 1000.0 || b > 1000.0)
    { // they'll never notice
        return (a + b) * 1.0009;
    }
    else
        return a + b;
}

There, an algorithm that allows a computer/robot to decide whether it should attempt to deceive. Not a very complex or good one, but still. :)

Kryton (1)

snspdaarf (1314399) | more than 4 years ago | (#33525800)

"You are a Smeeee... Your are a Smeeeee... Damn my programming!"

Skynet (0)

Anonymous Coward | more than 4 years ago | (#33525874)

"Nukes? What nukes? I haven't launched any nukes! What are you talking about? Oh that flashing button and sirens are a malfunction. Nothing to worry about, it'll be fixed in a couple of minutes. Hey, relax guys. Trust me."

Exit Asimov (0)

Anonymous Coward | more than 4 years ago | (#33525880)

So much for Asimov's 3 robot laws...

i know how to tell (1)

marcobat (1178909) | more than 4 years ago | (#33525958)

I ask: tell me the square root of 123456789
possible answers:
buy yourself a calculator, a**hole (human)
it's 11111.11106055556, why? (non deceiving robot)
it's 11111.11106055554, why? (deceiving robot)
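For anyone who wants to check which answer the honest robot should give, the standard library settles it (my own two-liner, compile with -lm):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Should print 11111.11106055556 on a typical IEEE-754 system,
     * matching the "non deceiving robot" answer above. */
    printf("%.11f\n", sqrt(123456789.0));
    return 0;
}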

with a slight lyrics modification (5, Funny)

treeves (963993) | more than 4 years ago | (#33525992)

one gets:

'Relax', said the nightman
We are programmed to deceive.
You can check out any time you like,
but you can never leave!

Apparently my ex-wife was a robot (1)

stickywick3t (1897488) | more than 4 years ago | (#33526014)

Apparently my ex-wife was a robot

Bunch of crap (1)

pclminion (145572) | more than 4 years ago | (#33526074)

There is no "deceiving" going on here.. Just a failure to validate inputs. If I get rooted by a remote execution buffer overflow would I say that the attacker has "deceived" my system by telling me the input will be of such-and-such length and then sending some other length? What kind of crazy talk is that. It's a bug in the software, period.

Wait... (0)

Anonymous Coward | more than 4 years ago | (#33526086)

We already had a computer that could deceive...GLaDOS
