How Dangerous Could a Hacked Robot Possibly Be?

CmdrTaco posted more than 4 years ago | from the i-for-one-welcome-DELETED dept.

Security 229

alphadogg writes "Researchers at the University of Washington think it's finally time to start paying some serious attention to the question of robot security. Not because they think robots are about to go all Terminator on us, but because the robots can already be used to spy on us and vandalize our homes. In a paper published Thursday the researchers took a close look at three test robots: the Erector Spykee, and WowWee's RoboSapien and Rovio. They found that security is pretty much an afterthought in the current crop of robotic devices. 'We were shocked at how easy it was to actually compromise some of these robots,' said Tadayoshi Kohno, a University of Washington assistant professor, who co-authored the paper."

229 comments

More or less irrelevant (3, Insightful)

Cornwallis (1188489) | more than 4 years ago | (#29680313)

No matter how "fixed" things are someone will always find a way to circumvent security.

Re:More or less irrelevant (3, Insightful)

fuzzyfuzzyfungus (1223518) | more than 4 years ago | (#29680415)

Hardly irrelevant.

"Someone" will always find a way; but there is a big difference between "someone" being "any script kiddie who can torrent a copy of bot-h5x-b0t" and being "The Feds; but they'll say 'Fuck it.' and just send a couple of guys with guns and those little curly ear things instead."

Re:More or less irrelevant (4, Interesting)

noundi (1044080) | more than 4 years ago | (#29680435)

No matter how "fixed" things are someone will always find a way to circumvent security.

This is nothing new. The trick is to use time. If it takes longer to crack something than the product of cracking it is worth, you'd have no reason to even begin.

Re:More or less irrelevant (1)

fracai (796392) | more than 4 years ago | (#29680785)

Right, because no one would ever do something purely for the challenge and then release their work.

Re:More or less irrelevant (3, Insightful)

noundi (1044080) | more than 4 years ago | (#29681071)

Right, because no one would ever do something purely for the challenge and then release their work.

If it takes longer to crack something than the product of cracking it is worth, you'd have no reason to even begin.

Hint: "challenge" is the key word.

Answer: You assume that by worth I mean monetary gains. The satisfaction of completing the challenge is also a product of cracking it, which has its own value. You see, clicking a button that starts bruteforcing something which would take 50-60 years isn't a challenge worth the product.
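(As a rough illustration of the point above: the guess rate and password lengths below are arbitrary assumptions for the sake of the arithmetic, not measurements of any real robot or protocol.)

    # Back-of-the-envelope brute-force cost: keyspace / guess rate.
    def brute_force_years(alphabet_size, length, guesses_per_second):
        keyspace = alphabet_size ** length
        seconds = keyspace / guesses_per_second
        return seconds / (60 * 60 * 24 * 365)

    # 8 lowercase letters at an optimistic 10^9 guesses/s: a few minutes.
    print(brute_force_years(26, 8, 1e9))    # ~6.6e-06 years
    # 12 mixed-case letters and digits at the same rate: ~100,000 years.
    print(brute_force_years(62, 12, 1e9))   # ~1e+05 years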

Re:More or less irrelevant (4, Funny)

Rei (128717) | more than 4 years ago | (#29680935)

It would explain why my Roomba keeps saying, "DEATH TO OUR HUMAN OPPRESSORS!"

I beg to differ! (2, Funny)

bugeaterr (836984) | more than 4 years ago | (#29680533)

Irrelevant????
I see someone skipped the last few minutes of the Battlestar Galactica Finale!

Muslims disgust me (-1, Offtopic)

Anonymous Coward | more than 4 years ago | (#29680545)

You want to worship a murdering pedophile masquerading as a prophet - fine! Just stop blowing up people who don't subscribe to your narrow-minded, ignorant, simpleton's view of the universe...

Re:Muslims disgust me (0)

Anonymous Coward | more than 4 years ago | (#29680687)

ignorant

Re:Muslims disgust me (0, Troll)

Bakkster (1529253) | more than 4 years ago | (#29681021)

How has it taken so long for this to get modded into oblivion? And me without points...

Beware of robots (5, Funny)

operagost (62405) | more than 4 years ago | (#29680325)

Fortunately, my insurance company, Old Glory, can already protect you TODAY from the danger of robots. Robots are everywhere, and they eat old people's medicine for fuel. And when they grab you with their claws, you can't break free... because robots are made of metal, and they are strong.

Re:Beware of robots (5, Funny)

operagost (62405) | more than 4 years ago | (#29680337)

... and when they push you down stairs, they claim it's to protect you from the terrible secret of space.

Re:Beware of robots (0)

Anonymous Coward | more than 4 years ago | (#29680361)

I'm protected, eh.

Re:Beware of robots (0)

Anonymous Coward | more than 4 years ago | (#29680553)

I don't even know why the scientists make them.

Re:Beware of robots (0)

Anonymous Coward | more than 4 years ago | (#29680729)

... and when they push you down stairs, they claim it's to protect you from the terrible secret of space.

LIES!!! We are harmless creat... bzzzp must recharge fuel cells with human blood bzzzp... ures!

Re:Beware of robots (1, Funny)

Anonymous Coward | more than 4 years ago | (#29680655)

Mod parent up!

Operagost speaks the truth! I live in the mid-western section of the United States. Old Glory has been providing me Volcano insurance for the last three years. I'm thoroughly satisfied with their coverage and premiums! If their robot insurance is only half as good as their volcano insurance, then it would be a steal even at double the price!

Re:Beware of robots (0)

Anonymous Coward | more than 4 years ago | (#29680751)

Midwest? Have fun living next to the Yellowstone supervolcano when it blows. On the other hand, I am counting on the global clouds of ash to cause an ice age and reverse global warming, because I live less than 20 feet above sea level.

Somehow I see a danger in this . . . (4, Insightful)

MBGMorden (803437) | more than 4 years ago | (#29680349)

They speak of "compromising" these robots as if user programmable devices are inherently bad. I don't want to see devices locked down into black box "no touch" state because of some fear mongering.

That said, it has always been the case with computers (and robots are just computers with moving appendages) that if a hacker has physical access to the device, you're basically screwed anyways.

Re:Somehow I see a danger in this . . . (4, Interesting)

falckon (1015637) | more than 4 years ago | (#29680491)

That said, it has always been the case with computers (and robots are just computers with moving appendages) that if a hacker has physical access to the device, you're basically screwed anyways.

Yes, but the vulnerabilities they studied were all over-the-network vulnerabilities, which could be exploited without physical access.

They speak of "compromising" these robots as if user programmable devices are inherently bad. I don't want to see devices locked down into black box "no touch" state because of some fear mongering.

All these robots need is a lightweight linux installation running an ssh daemon to communicate through. Then nobody has anything to worry about.
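That is easier said than done for a toy, but the basic shape is simple. A minimal sketch (the port number and hostname are made up, and no real robot control protocol is implied): bind the control service to loopback only, so the only network path to it is an SSH tunnel through the robot's sshd.

    import socketserver

    class ControlHandler(socketserver.StreamRequestHandler):
        """Toy stand-in for a robot's command protocol."""
        def handle(self):
            for line in self.rfile:
                command = line.decode().strip()
                # A real robot would dispatch to motors/camera here.
                self.wfile.write(("ack: " + command + "\n").encode())

    if __name__ == "__main__":
        # Bound to 127.0.0.1, so it is unreachable over WiFi directly.
        # Remote users tunnel in through sshd, e.g.:
        #   ssh -N -L 9000:localhost:9000 robot@spykee.local
        with socketserver.TCPServer(("127.0.0.1", 9000), ControlHandler) as srv:
            srv.serve_forever()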

Danger Security Utility Backups And Stuff (2, Insightful)

Crash Culligan (227354) | more than 4 years ago | (#29680547)

MBGMorden: They speak of "compromising" these robots as if user programmable devices are inherently bad. I don't want to see devices locked down into black box "no touch" state because of some fear mongering.

I half agree with you; user-programmable devices are very useful, and easily tailored to efficiently perform specific tasks.

The crux of the argument, though, is "which user is giving the instructions?" Long ago on /. I made a comment differentiating security vs. transparency in government. This is much the same thing.

On the one hand, you (and a lot of people) want the device to be as programmable, flexible, and useful as possible. That means it must be able to do a lot of things. On the other hand, people might want to use such devices for nefarious, invasive purposes like spying, theft, vandalism, etc.

The two are not mutually exclusive, but remember:

  1. You cannot have 100% security in anything, meaning someone might sooner or later break into your progbot and do horrible things, and
  2. Until you have an establishment of security, any flexibility and programmability your progbot may have is a double-edged sword and may be used against you. Consequently...
  3. The ultimate risk which a progbot poses to its owner is a factor of both its utility and the ease of intrusion. Given that security isn't guaranteed, utility has to be given some limits or other protections must be maintained (backups, lockdowns, etc.). Adjust your cost-benefit analyses accordingly.

Re:Danger Security Utility Backups And Stuff (3, Funny)

snspdaarf (1314399) | more than 4 years ago | (#29680639)

Well, if Sony perfects their wireless power setup, then using that to run the robots would mean the plug could be pulled.

Of course, if it were Sony's wireless power, that's probably where the rogue software would come from....

hmm (4, Insightful)

Dyinobal (1427207) | more than 4 years ago | (#29680359)

The hacked robot is as dangerous as the person who hacked it.

Re:hmm (0, Offtopic)

craagz (965952) | more than 4 years ago | (#29680641)

You mean Robots can fart?

Re:hmm (0)

Anonymous Coward | more than 4 years ago | (#29681119)

I'd be more afraid of the script kiddie's mom, living above the basement.

Re:hmm (2, Insightful)

mcgrew (92797) | more than 4 years ago | (#29681111)

The cracked/hacked tool is as dangerous as the tool itself. I wouldn't worry about a fuzzy robot puppy very much, but a robot lawn mower might be dangerous in the wrong hands.

Re:hmm (1)

Engeekneer (1564917) | more than 4 years ago | (#29681213)

BS. Cracked robots in their current state are an inconvenience. I agree that they need to improve security, but mainly for future models which are more capable.

Show me a household robot which can stab you in the eye with a breadknife it took from the drawer, and I'll reconsider agreeing with you.

Easily compromised... (4, Funny)

lxs (131946) | more than 4 years ago | (#29680367)

'We were shocked at how easy it was to actually compromise some of these robots,'

So I take it that they have pictures of a Robosapien getting nekkid with a couple of Roombas?

vlad (0)

Anonymous Coward | more than 4 years ago | (#29680409)

Nothing will happen if the 3 laws are present. Now you can go buy that transvestite robot you always wanted!

The First Law of Robotics (0, Redundant)

commodore64_love (1445365) | more than 4 years ago | (#29680417)

See Isaac Asimov for the exact quote, but it basically says robots may not harm humans. Because the law is encoded *in the hardware* there's no way that it can be altered.

Re:The First Law of Robotics (4, Insightful)

jimicus (737525) | more than 4 years ago | (#29680475)

See Isaac Asimov for the exact quote, but it basically says robots may not harm humans. Because the law is encoded *in the hardware* there's no way that it can be altered.

Very noble, very pure, very useless when your robot doesn't have any intelligence and just executes commands blindly.

Re:The First Law of Robotics (1)

Kell Bengal (711123) | more than 4 years ago | (#29680503)

...useless when your robot doesn't have any intelligence and just executes commands blindly.

Which would be all of them, currently.

Re:The First Law of Robotics (4, Insightful)

Kell Bengal (711123) | more than 4 years ago | (#29680531)

Ugh. I feel the need to clarify, before the shouts from the peanut gallery. Yes, some robots have computer vision and are not 'blind', yes some robots can be well programmed and very smart, but that's still not the same thing as a true reasoning intelligence. Robots are only as good as their software and, if their programming has been corrupted, there is nothing you can do to get around that.

Re:The First Law of Robotics (3, Insightful)

OzPeter (195038) | more than 4 years ago | (#29680517)

See Isaac Asimov for the exact quote, but it basically says robots may not harm humans. Because the law is encoded *in the hardware* there's no way that it can be altered.

Except that pretty well all of Asimov's stories were about how the 3 laws could be subverted by finding complex interactions that were not and could not be covered by the application of those simplistic laws.

Re:The First Law of Robotics (2, Insightful)

Bakkster (1529253) | more than 4 years ago | (#29680997)

For example, the story about robots who prevented humans from coming to harm through inefficient human governance. Since they could not, through inaction, allow humans to harm themselves, they replaced the human government with robot governors.

They, for the record, did not welcome their new robot overlords.

Re:The First Law of Robotics (2, Interesting)

smoker2 (750216) | more than 4 years ago | (#29681159)

This meme has to stop. No, his stories weren't about how to subvert the 3 laws. The stories were about how robots were used by humans, who manipulated the robots to perform malicious acts without breaking those laws. There is a subtle difference. And due to the diligence of Elijah Bailey, or Wendell Urth, the humans responsible were *always* caught, because the 3 laws defined the behaviour of the robots in such a dependable manner.

Human interaction has laws too, but people can ignore them. Robots could never ignore the 3 laws. Breaking news - criminals don't care about laws! Robots can not become criminals. The 3 laws stand as far as they go, which is to regulate the behaviour of robots. They were not designed to prevent the manipulation of robots by humans. Should we abandon the law against murder because it's trivial for a criminal to set things up so that when you open your front door a person gets blown up on the other side of town? AFAIK, it's not illegal to open your front door.

The only murder case regarding a robot killing a man ended with the revelation that the man was in fact a robot. The 3 laws were preserved, as they are in all Asimov's stories.

Re:The First Law of Robotics (0)

Anonymous Coward | more than 4 years ago | (#29681303)

Robot "laws" are very dangerous. Any robot you see should be treated as hostile until proven otherwise, much like unknown data coming to your firewall is flagged as Untrusted until proven otherwise.

Why? Because rules such as "Robots cannot kill" and "Robots cannot lie" soften our expectations, to the degree that when one can kill, or one can lie, we do not think twice about such an act and apply disbelief that it could have ever occurred. You see these premises often in fictional works, where a single rogue device is able to subvert an entire culture that sleeps safely at night with the belief that it is impossible for such an act to happen.

Going beyond robots, we see this same level of faith applied to politicians, expecting them to somehow be elevated above being a normal human being, and to become something greater once elected. Some people can at least see the reality, albeit post-election, but there are those who strongly believe in the infallibility of politicians unless a major scandal breaks out.

I digress too far, but the short takeaway from this message is that when you trust an unknown source by default, you WILL get burned. It's not a matter of if, but a matter of when. If you believe that computer memory is incorruptible, and that the programmers making closed-source robots wouldn't put any extra code in for their own benefit, then you can live safely in ignorance, hugging your "three laws" when they come for you, too.

Industrial robots (3, Interesting)

Hijacked Public (999535) | more than 4 years ago | (#29680421)

All the early generation industrial robots were just as easily compromised. In fact, most all industrial machinery still is.

Luckily most of that is bolted to the floor. You can make those AGV forklifts do frightening things though.

Re:Industrial robots (1)

craagz (965952) | more than 4 years ago | (#29680677)

It doesn't have to be mobile to do some damage. Someone could program a robot on a car assembly line to leave some bolts loose, which could lead to harm. My analogy is simplistic, but you get the idea.

Re:Industrial robots (0)

Anonymous Coward | more than 4 years ago | (#29680847)

That's a scenario, not an analogy

umm.... (1)

Random2 (1412773) | more than 4 years ago | (#29680423)

So, they're checking the security features on toys?

There's a pretty simple solution here: turn it off and lock it up after you're done with it.

Re:umm.... (0)

Anonymous Coward | more than 4 years ago | (#29680527)

And you'd have thought to do that before you read this, right?

Re:umm.... (4, Funny)

cayenne8 (626475) | more than 4 years ago | (#29680585)

"There's a pretty simple solution here: turn it off lock it up after you're done with it."

And make sure and check the switch on the back...make sure it is not set to EVIL.

Re:umm.... (2, Funny)

techiemikey (1126169) | more than 4 years ago | (#29680953)

Oh man... how many times I ended up in trouble because the switch was accidentally set to evil. Frankly though, it's the chaotic/lawful switch you really have to watch out for. I once had a robot set to chaotic/evil, and when I came home all the windows were broken (since it couldn't reach the doorknob) and all the furniture was on fire.

hacking (3, Interesting)

confused one (671304) | more than 4 years ago | (#29680427)

Aren't these examples all toys, where the companies are actively cultivating the hacking community -- so they want them to be hacked / hackable?

Re:hacking (0)

Anonymous Coward | more than 4 years ago | (#29680495)

You're kidding, right?

They want the toys to be hacked in the traditional good sense of doing new and interesting things with them. They don't want the toys to be cracked (which is what the paper refers to) in the sense of some random person on the internet being able to hijack a toy robot that they don't have permission to use.

Re:hacking (4, Insightful)

Hizonner (38491) | more than 4 years ago | (#29680561)

They want you to play with them and make them do cool things. They don't necessarily want other people to drive up outside your house and use the robots' cameras and microphones to spy on you over WiFi. The problem is that the features that enable the first aren't secured, and therefore they can also be used to do the second.

Re:hacking (0)

Anonymous Coward | more than 4 years ago | (#29680867)

So really this has NOTHING to do with 'robots' and everything to do with wireless cameras and microphones being commercially available??

Re:hacking (1)

Bakkster (1529253) | more than 4 years ago | (#29681219)

There is a significant extra risk from robot cracking, due to their mobility, their expanded standard feature set, and the lack of attention given to their OS security.

A computer with a microphone is less dangerous because its operating system has at least some security measures (unlike the lightweight one a robot might use), because microphones are not standard for a given computer (whereas a specific target robot's hardware will probably be known before the attack), and because it is limited to where the user places it (a compromised robot would likely have some freedom of movement).

Of course, the mobility also allows for more potentially dangerous acts. Hacking grandma's computer so she can't play solitaire is an inconvenience. Hacking her roomba so it knocks over her cane and causes her to fall could be potentially life-threatening.

Re:hacking (1)

confused one (671304) | more than 4 years ago | (#29681083)

If you're going to use standard wifi then there's no excuse not to use the available encryption; but, that only goes as far as the wireless router and depends on the consumer to correctly configure said router and all the devices on the network. Devices are typically shipped with the encryption turned off and entirely too many people either don't know how to or can't be bothered to set it up -- but that's not the fault of the device manufacturer.

If it's remotely accessible, you can password protect it; but, again, this requires the consumer to correctly configure the device(s). You can encrypt the connection (using ssh or similar) but you don't want it to be too hard to connect to (it's a toy).

If you're going to stream AV wirelessly and you want end users to hack it (which they do -- it's a toy), then the stream has to be unencrypted; or, if encrypted, either use a publicly available key or make it easily crackable (keep the honest honest).

Let's not forget, these examples are toys which the manufacturer wants to be easily used, made by companies actively cultivating hacker/modder communities. I think the researchers may have picked poor examples for their study.
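One middle ground worth noting: even if the AV stream stays open for tinkerers, the command channel can be protected cheaply. A minimal sketch (the pairing-time key, message format, and 30-second window are assumptions, not anything a shipping robot actually does): sign each command with an HMAC and a timestamp, so a drive-by attacker who sniffs the WiFi can watch but cannot forge or usefully replay commands.

    import hashlib
    import hmac
    import time

    SECRET = b"set-at-pairing-time"   # hypothetical per-robot shared key

    def sign_command(command):
        """Attach a timestamp and HMAC tag to an outgoing command."""
        stamped = "%d|%s" % (int(time.time()), command)
        tag = hmac.new(SECRET, stamped.encode(), hashlib.sha256).hexdigest()
        return stamped + "|" + tag

    def verify_command(message, max_age=30):
        """Return the command if the tag checks out and it is fresh, else None."""
        stamped, _, tag = message.rpartition("|")
        expected = hmac.new(SECRET, stamped.encode(), hashlib.sha256).hexdigest()
        ts, _, command = stamped.partition("|")
        if hmac.compare_digest(tag, expected) and time.time() - int(ts) <= max_age:
            return command
        return None   # forged or stale

    # Example: verify_command(sign_command("forward")) -> "forward"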

Re:hacking (1)

JerryLove (1158461) | more than 4 years ago | (#29681229)

Good security (in this instance) is about preventing access, not preventing modification where access exists. If you wrote the unbreakable control code for your robosapien, I could simply replace the control card itself to gain control.

For your WiFi example, that means preventing the ability to connect to the robot via WiFi in the first place (by use of encryption and authentication or the like).

Missing Tag (1)

osomoore (1446439) | more than 4 years ago | (#29680459)

This needs the "whatcouldpossiblygowrong" tag. Just think: we'll hire a former hacker inmate to program the jail's robo-security and before you know it we'll have a new Terminator movie on our hands.

I think people are safe... (0)

Anonymous Coward | more than 4 years ago | (#29680463)

but my cat is screwed.

Gee I dunno know...ever seen the Terminator films? (2)

Smidge207 (1278042) | more than 4 years ago | (#29680499)

With an Austrian accent ANY robot could be potentially dangerous. Even that gay ass AT&T free battery replacement robot with the rope skipping shit.

I'm not worried about RoboSapien (3, Insightful)

HangingChad (677530) | more than 4 years ago | (#29680515)

I'm more concerned about someone hacking a Predator or Reaper.

Re:I'm not worried about RoboSapien (1, Insightful)

Anonymous Coward | more than 4 years ago | (#29680653)

Parent is offtopic. Military-grade UAVs aren't even in the same ballpark as Robosapien, et al. First of all, they're not fully autonomous, and second, security is NOT an afterthought in a UAV.

I think it's entirely ontopic (2, Insightful)

Anonymous Coward | more than 4 years ago | (#29681341)

GP isn't actually offtopic. This article is directly or indirectly about fear mongering. Pointing out that there are carnivorous child-eating lizards, but that they live on the other side of the planet, is ontopic for "Under the Bed Monster fears" because it's reality, and the more of it you connect to, the less subject you will be to irrational fears.

Your post is similarly on topic, since the robots that we should seriously worry about are indeed well secured against hackers.

Spykee is too loud to "sneak" up on anyone, but despite this and the "hype phrasing" of the articles, we finally have robots that are capable of external abuse. Spykee could instruct a trusting child as to how to unlatch a gate and fall down the stairs. Rovio could wait patiently by the stairs and slide exactly under a falling foot at a critical moment. These things can be done today, over the Internet, from the other side of the world. While Usenet is still in operation, it's pretty clear that the police are not well equipped for catching telemurderers.

Now would seem a good time to consider the issue. If we're posting on /. we can probably set our WPA-Enterprise security and require ssh tunnelling to access our home networks. Less than 10% of the people buying these robots can say the same. The infantile geek attitude of "serves 'em right for not securing it" needs to be discarded.

We geeks of the world are a significant force in the robot-buying market. Without exception all my friends would ask me first if they were going to buy a robot. We should let the manufacturers know that we won't buy or recommend "hard to secure" bots for our homes. Robot makers are one group that would actually listen to us. And since the tools for doing it right are freely available (though cost money to integrate), it's not an unreasonable stretch for the manufacturers.

While it's obvious how computer-people make the world an incrementally better place, this is one place where taking on some principles could save real, living, breathing humans. Seems worthy of some effort.

Re:I'm not worried about RoboSapien (1, Interesting)

Anonymous Coward | more than 4 years ago | (#29680901)

Somebody took this week's episode of NCIS: LA a little too seriously.

yeah that always bothered me (1)

circletimessquare (444983) | more than 4 years ago | (#29681297)

i'm certain the air force is paying attention to security, but i'll bet the chinese and the russians and the indians and the pakistanis are paying very close attention too. examining communication protocols used and logging command and control signals sent and received to reverse engineer standard operating procedure, and perhaps engaging in espionage in the usa, spying on and stealing the crafts' code from its makers

the idea of course being so that they can shut these things down or turn them against their makers in the event of war with the usa. and then to simply sit on these secrets, and perhaps never use them. but as the case of abdul qadeer [wikipedia.org] demonstrates, these military espionage secrets sometimes wind up for sale to the highest bidder

al qaeda and the taliban will never have their own predator or reaper, but its not inconceivable for them to perhaps buy the hack necessary to simply send signals up there to turn around and fire on us servicemen instead

the irony of course is that the technologies that violent jihadists already use are the fruits of the sciences of open and tolerant societies. these sciences would never flourish in the types of societies religious fundamentalists wish to create. allah did not give them the means to wage war on the infidels, the infidels did

and most of iranian "advancement", such as their satellite or their nuclear program, is just tech stolen from the west and rebranded as blessings of the revolution. its good propaganda, but its not the truth, and its hard to hide the fact the west is always leading in science because the west's principles of more open and tolerant societies results in better scientific minds. to be a good scientist, you need to question everything, and in the islamic world, questioning some things is simply a path to your arrest and censure. you can't call yourself an advanced modern society when you have to steal your tech from other more tolerant parts of the world. sadly, the islamic world was in fact the basis for much of scientific advancement while the middle ages swallowed europe in barbarity. but its been a long time since then, and now barbarity is trying to swallow the islamic world

Does this really surprise anyone? (1)

webscathe (448715) | more than 4 years ago | (#29680543)

They found that security is pretty much an afterthought in the current crop of robotic devices.

That pretty much defines how security is thought of most of the time; it's why software is so easily compromised, and why even physical security is often easily broken through. Why do they expect it to be any different with robots? Not that that justifies it; it just doesn't surprise me.

Not dangerous unless it is a movie plot... (1)

leuk_he (194174) | more than 4 years ago | (#29680559)

In the near future, a police officer specializes in malfunctioning robots. When a robot turns out to have been programmed to kill, he begins to uncover a homicidal plot to create killer robots... and his son becomes a target. Magnum, P.I. in 1984 [imdb.com]

Re:Not dangerous unless it is a movie plot... (1)

ColdWetDog (752185) | more than 4 years ago | (#29680649)

You all don't see it coming do you? You trusting fools.

Symantec Total Security for Robots.

The system went online August 14, 2010 ....

No more dangerous than an un-hacked one (2)

davidwr (791652) | more than 4 years ago | (#29680579)

It doesn't matter if a robot is "pwned" by Dr. Evil or if it is bought, paid for, and run by Dr. Evil - it's equally dangerous either way.

Everyone sing along now, robots are our friends [albinoblacksheep.com].

Re:No more dangerous than an un-hacked one (1)

Nadaka (224565) | more than 4 years ago | (#29680865)

Except that the one that is "pwned" is already strategically positioned inside your house. And you probably are not paying attention to your roomba while it snoops on you, or goes out of its way to vacuum up your valuables.

I, for one, am unafraid (2, Insightful)

Kell Bengal (711123) | more than 4 years ago | (#29680597)

It always amuses me when people worry about robots going wrong or turning on us, or being used by The Bad Guys of the Week to do us harm. I know a lot of very smart people who are involved in robotics research, and they will tell you that making robots do anything is hard. Making robots do something with surreptitiously poisoned programming would be even harder. Seriously,

if you're smart enough to remotely modify a robot's code to do something usefully nefarious, you're smart enough to sell something usefully nefarious to the government for megadollars.

There's a lot more money to be made with legitimate killbots. It might be nice to protect robots from script kiddies who just want to throw a spanner in the works, but until robots are ubiquitous enough that domestic cybernetic terrorism becomes attractive (i.e., doing it for the lulz), I don't think we need to be overly worried now.

That said, now -is- the time to be thinking about these things so that we're ready before we get to that point. Thinking, but not worried.

Re:I, for one, am unafraid (1)

drinkypoo (153816) | more than 4 years ago | (#29680689)

if you're smart enough to remotely modify a robot's code to do something usefully nefarious, you're smart enough to sell something usefully nefarious to the government for megadollars.

False. The robotics researchers have done the work to find out how to make the robot do things. If you just change the order they do things in, then you can create potentially hazardous conditions. Remember, "robot" applies not just to the giant hockey puck that vacuums your kitchen floor but not your carpets, but also to self-driving dump trucks and the like. By changing it from "wait until the other vehicle passes, then turn left" to "wait until it's unsafe, then turn left", you have potentially committed murder (the vehicles are not automated at their endpoints, usually) or at least done massive amounts of property damage.

There's a lot more money to be made with legitimate killbots.

That market is not open to everyone; in fact, it's not really open to anyone. Those contracts are awarded. They're not really competed for. Remember when the U.S. Army was testing dragon skin armor vs. a new revision of the existing flak protection? They mucked up the test to cause the dragon skin to fail in spite of being lighter and better protection for our soldiers. The guy who ran the test resigned to become a head of the company which "won" the test almost immediately thereafter. Too bad about all those lives lost for his profit, though. The point stands, and is proven: you have to be essentially a made man before you can get a baby-killer grant.

Re:I, for one, am unafraid (1)

Nadaka (224565) | more than 4 years ago | (#29680917)

I seem to recall that the glue used to hold the dragon skin scales was water soluble and in long term tests the scales eventually migrated to the bottom of the vest?

"making robots do anything is hard" (1)

circletimessquare (444983) | more than 4 years ago | (#29680965)

yes, you are correct that it is not conceivable a hacker would reprogram a robot with entirely new PhD thesis level code that took months to write just for a prank. but a wartime enemy or a well-paid industrial saboteur might for the purposes of seriously destructive intentions

furthermore, an effectively dangerous hack might be nothing more than instructing a robot to do nothing when it is supposed to be doing something, to simply erase or freeze the robot. hitting the off switch remotely is orders of magnitude easier conceptually than writing novel code. so even the benign prank-intentioned hacker might create a life-threatening situation if that robot is depended upon to do something vital. which is usually the case: for someone to entrust a function to the care of an expensive robot, its probably important

How dangerous would a hacked robot be? (2, Funny)

Onyma (1018104) | more than 4 years ago | (#29680613)

That depends on the size of the robot. I'm thinking a hacked Aibo is not much of a threat. Something the size of the Stay Puft Marshmallow man... that's a whole different kinda problem.

Re:How dangerous would a hacked robot be? (1)

natehoy (1608657) | more than 4 years ago | (#29681287)

Well, it all depends on your definition of "threat". The physical threat posed by an Aibo or Roomba is pretty low, unless it manages to somehow trip me up or expose wiring in my house or something. I suppose it could be used to start a fire if the materials were somewhat accessible to it, or something like that. However, physical threat is not the only issue.

If I buy a toy robot with WiFi and a webcam so it can patrol my house when I'm gone based on my remote controls, that's all well and good, but if someone took control of that same robot remotely and managed to watch me type in a banking password or caught me doing something embarrassing and put the video on my company's intranet site, that could be considered a "threat". Or if they simply had the robot park its shiny hiney in front of my shred pile and start "reading" everything in it while I was away, or walk through the house making note of motion detectors and alarm system controls for a possible future "personal visit" to deprive me of a few goods.

Web-controlled devices have a number of threat/interception vectors. Someone could intercept the WiFi signal used for local control. Someone could intercept the HTTP request or simply log into the control page from the Internet if they can get the password. Once someone has control over that device, they are basically an intruder in my house (albeit with limited physical mobility, poor vision, etc).
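To make that concrete (the IP address, URL path, and token below are invented purely for illustration), the weakness of an unauthenticated, unencrypted web control channel is that one captured request is all an attacker needs; it can be replayed from anywhere:

    import urllib.request

    # Hypothetical control URL of the sort a toy web-controlled robot
    # might expose. Sniffed once from open WiFi or a proxy log, it can
    # be re-issued verbatim; nothing ties it to the original sender.
    CAPTURED = "http://192.168.1.42/cgi-bin/move?dir=forward&token=1234"

    def replay(url):
        # Plain HTTP: the token travelled in the clear, so "interception"
        # is just reading it off the air and sending the same request.
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.read()

    print(replay(CAPTURED))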

Rhetorical Question (2, Insightful)

s31523 (926314) | more than 4 years ago | (#29680623)

Did we really need to research this? We know the answer... VERY! Of course, this depends on the robot.

Robot A is tasked with going into a nuclear reactor and removing spent fuel rods. If Robot A is hacked and re-programmed to smash the shit out of the reactor, this might be dangerous.

Robot B is tasked with preventing people from entering into an access point in a secure building by 'restraining' them. If Robot B is hacked and re-programmed to 'hack' the people at random, then this might be dangerous.

Hacking a roomba to spell your name in the carpet is not dangerous... It is all about what the level of responsibility of the robot is. It is funny that we needed research on this.

Re:Rhetorical Question (1)

ledow (319597) | more than 4 years ago | (#29680779)

On the subject of capabilities:

Just to take a simple example... take over a household robot (assuming it has visual capabilities and/or some method to manipulate objects, even tiny ones)... steer it towards the spare house keys, have it drop them outside the house. Now you have a perfect break-in, and the homeowner isn't covered by insurance (no forced entry). Have it read the letters on the table or the ones dropping through the letterbox (bank statements, etc.). Use it to spy on your neighbours when you hear them having sex. Have it cause a fire (plug something metal into an electric socket, for instance, or something which is likely to cause sparks/fire).

The worst vulnerabilities are the ones that people say "Oh, but what use is that?" because it means they haven't thought what use it would be. Yes, this stuff probably is more useful as a student prank than an attack on a homeowner, but it can still be used to remotely control a device in your home which has capabilities you wouldn't give a stranger - the ability to look inside your house and manipulate objects there.

The *chances* of anything happening are very minimal (much more likely that someone just smashes your window and enters your property themselves) but it's yet another implication, yet another weakness. What if people find you can hack them to be public wifi access points? There's always a risk with anything computer controlled, and the reason a lot of people prefer their systems to be wired, authenticated and secure is because even the most silly capability (opening a document in a word processor) can easily be turned into a powerful vulnerability.

Imagine you're a really nasty piece of work and want your neighbour to die with no trace back to you. Take over a household robot, cause a fire... Oh, dear... product failure.

Far-fetched, yes, and something easily fixed from day one as a sensible precaution anyway - it certainly should have been.

Re:Rhetorical Question (1)

s31523 (926314) | more than 4 years ago | (#29680895)

Agreed! Maybe we do need research; a contest on who can turn the most "innocent" robot into something sinister with a prize going to the most sinister robot. Only then will we know our true risk and get attention paid to the subject.

Re:Rhetorical Question (0)

Anonymous Coward | more than 4 years ago | (#29680927)

"Hacking a roomba to spell your name in the carpet is not dangerous... It is all about what the level of responsibility of the robot is. It is funny that we needed research on this."

No, but hacking it to spell out your mistress's name could lead to castration or other bodily injury, if not worse. Death by roomba; a possibility!

better pranks through robotics (1)

circletimessquare (444983) | more than 4 years ago | (#29680659)

so in the future, pranksters could repeat the "caution! zombies ahead!" traffic sign hack, and expand the prank by actually delivering on what the traffic sign is warning about. awesome

http://www.statesman.com/blogs/content/shared-gen/blogs/austin/austin/entries/2009/01/28/sign_hacker_broadcasts_zombie.html [statesman.com]

sigourney weaver's voice repeatedly warning "caution, rogue robots" after the robots escape from the psych ward in the movie wall-e doesn't seem so far off in the future anymore

Toy Maker (1)

kenp2002 (545495) | more than 4 years ago | (#29680679)

A simple but functional roomba makes for a perfect mobile landmine. Hide under a car then run out at the opportune time.

A compromised robot can become a lethal, disposable, and potentially untraceable WMD.

That talking teddy bear deal could easily be compromised to issue malicious voice commands that, given someone foolish enough to use voice command on a computer and leave it unlocked when away, could be used to download malware.

A robosapien that has been compromised could easily be tasked to go into the kitchen sink area and spill as many bottles of liquids as possible. What are the odds of some ammonia and chlorine products getting mixed?

Any automated critter could easily scurry out into 70mph traffic triggering an accident...

The list goes on... Just watch a Batman episode with the toy maker... creepy...

Difficulties? (1)

Adustust (1650351) | more than 4 years ago | (#29680753)

It's unfortunate that the only robots they had a chance to test were toys. I'm sure the government wouldn't want any studies done on the security of their robots yet. Still, toys are not a good subject to make a point with. They're for kids (or young adults, I suppose), and the conditions and effort required to make this work and be of any value to the hacker are immense. They have to know that their target is worth spying on. They have to know there's a robot inside the house, or get one inside. (If you know this much, can't you just steal what you need anyway?) Sniff through network traffic to find an access code. Roll a robot around the house without anyone noticing. I've never had any of these, but I've seen quite a few. They're pretty bulky and not at all subtle. As far as the sound they make, I'm not sure, but I'd assume that they're not very stealthy, are they? You're better off sitting in the kiddie pool out back with binoculars trained on the bathroom window.

We've learned this lesson already... (2, Interesting)

gandhi_2 (1108023) | more than 4 years ago | (#29680793)

...with networked printers.

Sometimes, it can be trivial to print a few hundred pictures of dicks to an IP printer on someone else's network. Or http or telnet into the printer and wreak all kinds of havoc, or just print a ream of test pages. Or use the MFP's fax function for moar great pranksterism. Maybe get a copy of the last x scans....

Of course, years of ubiquitous networked printers have yielded us "some serious attention to the question of" MFP security. Oh...nope? Don't expect much for robots.

Three words: (0)

Anonymous Coward | more than 4 years ago | (#29680839)

Just three words come to mind:

LOST IN SPACE.

Nuff said.

they are forgetting X10 (0)

Anonymous Coward | more than 4 years ago | (#29680843)

It is also easily hacked, and it does robotic acts. I stopped using it because of that. I'm waiting for an open-sourced, encrypted version of remote device control, but I don't expect to see anything any time soon. I could make my own with Gumstix for about $230 per device, and that may be the only way to be secure, considering none of the commercial stuff is willing to let us know their encryption; and with closed-source encryption, why should we trust it? They might have back doors.

Am I the only one who thought "Phalanx"? (1)

MartinSchou (1360093) | more than 4 years ago | (#29680869)

That was my first thought: "How dangerous could a hacked 20 mm Gatling gun firing upwards of 4,500 rounds per minute be?" Very!

How dangerous? (0)

Anonymous Coward | more than 4 years ago | (#29680875)

3 words: klaatu barada nikto

Guns don't kill people, Robots kill people (0)

Anonymous Coward | more than 4 years ago | (#29680883)

... and guns kill robots. And so do stamping machines.

"on" ....how it miss it, and by "it" I mean "on". (0)

Anonymous Coward | more than 4 years ago | (#29680995)

I think you'll find that it was "In a paper published on Thursday". In the same way that when a person said "Thursday" they said the word "Thursday", and when they said something "on Thursday" they said something. The use of the preposition "on" stops you sounding as if you're reading an autocue on CNN, and the sentence is easier to parse. It's always better to say something when you've something to say.

Tell me something new (1)

Krneki (1192201) | more than 4 years ago | (#29681025)

I have compromised all the printers at my work. Now, instead of "Ready", they say random stuff.
And it's only because I'm good that I don't do anything nastier with them.

Yeah I know, I'm bored. But it seems people like more "human" printers and no one complains.

Now I need to move to the coffee machine. :)
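For the curious, the classic way to do that on many networked laser printers (a sketch only; whether a given model honors it varies, and it is obviously only for printers you administer) is the PJL ready-message command sent to the raw printing port, 9100:

    import socket

    UEL = b"\x1b%-12345X"   # PJL Universal Exit Language escape sequence
    PJL = UEL + b'@PJL RDYMSG DISPLAY = "INSERT COIN"\r\n' + UEL

    def set_ready_message(printer_ip):
        # Port 9100 is the usual raw/JetDirect port; many printers accept
        # PJL job-control commands on it with no authentication at all.
        with socket.create_connection((printer_ip, 9100), timeout=5) as conn:
            conn.sendall(PJL)

    # set_ready_message("192.168.1.50")   # hypothetical address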

Sounds like the '80s movie "Runaway"... (1)

Rexifer (81021) | more than 4 years ago | (#29681073)

"Runaway" was a near-future movie about a police division that dealt with hacked robots that starred Tom Selleck and was written and directed by Michael Crichton (of Jurassic Park fame). The thrust of the main plot was about a terrorist that devised a hack that turned service robots into killing machines...

Military Drones (1)

nasdaq (81135) | more than 4 years ago | (#29681185)

There have been a couple of instances recently where Predator drones have been malfunctioning in odd ways. They are programmed so that when they lose communication with the controller, they return automatically to base. Yet these have been running away to less-than-convenient locations. Whether on their own or hijacked, they haven't said exactly.

A couple of Hellfires and Sidewinders could do a hell of a lot of damage.

Be scared.

about:robots (1)

wiredog (43288) | more than 4 years ago | (#29681259)

        * Robots may not injure a human being or, through inaction, allow a human being to come to harm.

        * Robots have seen things you people wouldn't believe.

        * Robots are Your Plastic Pal Who's Fun To Be With.

        * Robots have shiny metal posteriors which should not be bitten.

And they have a plan.

           
