
Eben Moglen: Time To Apply Asimov's First Law of Robotics To Smartphones

Soulskill posted more than 2 years ago | from the or-through-inaction-allow-texting-at-the-dinner-table dept.

Cellphones 305

Sparrowvsrevolution writes "Free software lawyer and activist Eben Moglen plans to give a talk at the Hackers On Planet Earth conference in New York next month on the need to apply Isaac Asimov's laws of robotics to our personal devices like smartphones. Here's a preview: 'In [1960s] science fiction, visionaries perceived that in the middle of the first quarter of the 21st century, we'd be living contemporarily with robots. They were correct. We do. We carry them everywhere we go. They see everything, they're aware of our position, our relationship to other human beings and other robots, they mediate an information stream about us, which allows other people to predict and know our conduct and intentions and capabilities better than we can predict them ourselves. But we grew up imagining that these robots would have, incorporated in their design, a set of principles. We imagined that robots would be designed so that they could never hurt a human being. These robots have no such commitments. These robots hurt us every day. They work for other people. They're designed, built and managed to provide leverage and control to people other than their owners. Unless we retrofit the first law of robotics onto them immediately, we're cooked.'"


Encyclopedia Galactica (5, Insightful)

the_povinator (936048) | more than 2 years ago | (#40454765)

This kind of phone will be like the Encyclopedia Galactica of phones. Much better than the standard phone (i.e. the Hitchhiker's Guide), but slightly more expensive, a bit boring, and nobody will buy it.

Re:Encyclopedia Galactica (5, Interesting)

crypticedge (1335931) | more than 2 years ago | (#40454843)

I would. I hate how every app I download on my Android phone requires access to my contacts, phone state, text messages, and a dozen other things a non-internet-enabled app shouldn't ask for. Why does a game need to know who my contacts are? It's a single-player game, not an online social game. Why does a game require my text messages? Why does it require my GPS location?

It doesn't. We need to revolt against the idea that we are the product and the item we buy is simply a tool they use to spy on us.

Re:Encyclopedia Galactica (5, Insightful)

Anonymous Coward | more than 2 years ago | (#40455151)

Stop buying those games. Stop downloading the free crap that really isn't free; it's just not being charged for in a currency you recognise.

Re:Encyclopedia Galactica (1)

h4rr4r (612664) | more than 2 years ago | (#40455283)

Stop buying or installing such apps.
Lots of games do not require such things.

Re:Encyclopedia Galactica (5, Insightful)

0123456 (636235) | more than 2 years ago | (#40455555)

I have a better idea: fix the OS to allow users to deny individual permission to applications.

Of course Google won't do that because then they might not be able to track you so well for their targeted advertising.
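For what it's worth, the fix this commenter proposes is small enough to sketch. The snippet below is a toy model only (the class and permission names are invented, not any real Android API): the OS keeps a user-editable grant table and consults it before servicing any privileged request, so the user can deny one permission without uninstalling the app.

```python
# Toy sketch of per-app permission denial, as proposed above.
# All names are invented for illustration; this is not a real Android API.

class PermissionManager:
    def __init__(self):
        self.grants = {}  # app name -> set of granted permissions

    def grant(self, app, permission):
        self.grants.setdefault(app, set()).add(permission)

    def revoke(self, app, permission):
        # the user can deny a single permission without uninstalling the app
        self.grants.get(app, set()).discard(permission)

    def check(self, app, permission):
        # the OS would consult this before servicing any privileged request
        return permission in self.grants.get(app, set())

pm = PermissionManager()
pm.grant("some_game", "INTERNET")
pm.grant("some_game", "READ_CONTACTS")
pm.revoke("some_game", "READ_CONTACTS")  # deny just the nosy permission
print(pm.check("some_game", "INTERNET"))       # True
print(pm.check("some_game", "READ_CONTACTS"))  # False
```

(As it happens, Android later shipped a version of this idea as runtime permissions in Android 6.0; at the time of this thread, permissions were all-or-nothing at install.)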

Re:Encyclopedia Galactica (1)

Jeng (926980) | more than 2 years ago | (#40455293)

Are these free apps or paid for apps that you are complaining about?

Got examples?

I do agree, though, that the "why" should be fully explained: the store tells you what permissions an app needs, but it does not tell you why they are needed.

Re:Encyclopedia Galactica (1)

Politburo (640618) | more than 2 years ago | (#40455567)

Under such a system I don't think it would be much different in reality. The explanation could simply be a false front; either way you have to trust the developer or not use the app.

Re:Encyclopedia Galactica (1)

Jeng (926980) | more than 2 years ago | (#40455727)

False front or not, there should be an explanation, and just because someone can lie when writing the explanation does not mean that all the explanations are going to be lies.

Given enough time and exposure the people giving the false front will be found out.

Re:Encyclopedia Galactica (0)

Anonymous Coward | more than 2 years ago | (#40455453)

Easy solution: Send the application spoofed data.
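A hedged sketch of that approach (all names and values invented for illustration): rather than denying a permission outright, which can crash apps that assume it is granted, the platform hands back plausible fake data. Third-party privacy tools such as XPrivacy later worked roughly along these lines.

```python
# Toy sketch of the "spoofed data" idea: privileged lookups return
# plausible fakes for flagged apps. All names/values are illustrative.

REAL_CONTACTS = ["Alice <alice@example.com>", "Bob <bob@example.com>"]
REAL_LOCATION = (40.7128, -74.0060)

def get_contacts(app, spoofed_apps):
    if app in spoofed_apps:
        return []  # a valid but empty contact list
    return list(REAL_CONTACTS)

def get_location(app, spoofed_apps):
    if app in spoofed_apps:
        return (0.0, 0.0)  # "Null Island" instead of the real fix
    return REAL_LOCATION

spoofed = {"nosy_game"}
print(get_contacts("nosy_game", spoofed))    # []
print(get_location("trusted_app", spoofed))  # (40.7128, -74.006)
```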

Every app YOU downloaded (5, Insightful)

SmallFurryCreature (593017) | more than 2 years ago | (#40455789)

YOU downloaded those apps, the phone just executed the command YOU gave it. Should your phone override your commands? Decide on its own what is best for you?

The entire article is insane. You should NEVER take a fictional book and use it as fact. Asimov was not a programmer or OS designer; he was a writer, and he used artistic license to suggest a theory, a point from which to start a discussion perhaps, but not an accurate blueprint for a certain future.

There is no place in a modern OS for Asimov's rules of robotics.

First off, our computers have no self-determination whatsoever. The idea behind Asimov's robots is that they are "born" and then guide themselves, with at most human-like instructions to give them direction. How they are programmed, patched, etc., doesn't become clear in those stories, because it doesn't matter for the story. But it does matter in real life.

How would getting root on an Asimov robot work? What if you as the owner insisted on installing a utility/app that would perhaps cause it to violate its rule sets? What if an update removed those rules?

How would your phone even know this? It would have to somehow analyse any code presented to it, to see whether it overrides something or whether a setting has a consequence that would violate the rules. There is no way to do this. And how would you update a robot with a bug that causes it to faultily see an update as a violation, while in fact its current code is the code in violation?

The sentient robot is a nice gimmick but it is nowhere in sight in our lives.

Android's install warnings tell you exactly what an app needs. If you don't want to grant those permissions, don't install it.

No need for magic code, just consumer beware. Any sentient being should be able to do that. That you cannot... are you sure you are human? Or are you just a bot dreaming he is human?

Three Laws (5, Informative)

SJHillman (1966756) | more than 2 years ago | (#40454791)

To those who don't remember, Asimov's Three Laws are:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
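As a toy illustration of the point that the laws form a strict priority ordering, here is a sketch in Python. The predicates are stand-ins (dictionary flags); actually deciding whether an action "injures a human" is the unsolved part, which is much of what the thread below argues about.

```python
# Toy model of the Three Laws as a strict priority ordering: laws are
# consulted in order, and a lower law never overrides a higher one.
# The boolean flags stand in for judgments no real machine can make.

def evaluate(action):
    if action.get("injures_human"):
        return "forbidden by First Law"
    if action.get("ordered_by_human"):
        return "required by Second Law"
    if action.get("endangers_self"):
        return "forbidden by Third Law"
    return "permitted"

# An order to injure someone: the First Law outranks the Second.
print(evaluate({"injures_human": True, "ordered_by_human": True}))
# forbidden by First Law
```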

Re:Three Laws (5, Informative)

Daniel_is_Legnd (1447519) | more than 2 years ago | (#40454861)

And the zeroth law: A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

Re:Three Laws (-1)

Anonymous Coward | more than 2 years ago | (#40455173)

I always thought that was an interesting loophole in the first law. The mention of "a human" seems to imply harming 2 or more humans would be totally alright, so long as you hurt them both at the same time. Then you would not be harming "a human" but rather "humans".

Seems like just the logical loophole a rogue AI might devise.

Re:Three Laws (2)

Immerman (2627577) | more than 2 years ago | (#40455435)

That's a pretty shaky loophole - and considering the whole point of the laws was to inspire interesting loophole-driven stories that's saying something. If you harm several humans then despite any semantic arguments there are still several cases of "a" human being harmed.

Re:Three Laws (2)

eternaldoctorwho (2563923) | more than 2 years ago | (#40455495)

The robot would still be harming "a human", but would be doing that twice (albeit simultaneously). At least, I would hope a robot would view humans as discrete entities. The zeroth law is designed to prevent robots from harming an abstract group of humans, or even some beings representing humanity.

To me, the real loophole is defining what a human is. If humanity is only considered in the sense of "20th century homo sapiens with a specific gene sequence", then we get into more of a problem as time goes on. A single robot will keep on going, theoretically, whereas humanity will grow and change and adapt and evolve. Will WW3 be the Robot Eugenics War?

Re:Three Laws (0)

Anonymous Coward | more than 2 years ago | (#40455497)

Change law #1 to "any human or humans" and it closes the loophole.

Re:Three Laws (1)

jzuccaro (1234644) | more than 2 years ago | (#40455707)

In one of his works, Asimov introduced another loophole: since, in that universe at least, it was impossible to build a robot free from the three laws, if you wanted a robot to hurt somebody you just taught it a biased notion of what a "human" is. In that case, the robots were taught that only people who spoke with a particular accent were humans.

Re:Three Laws (4, Funny)

ColdWetDog (752185) | more than 2 years ago | (#40454881)

If you apply the first law to my smartphone, it would basically turn itself off and short the battery.

That might be an overall improvement, but I don't think it would be a terribly popular move.

Re:Three Laws (1)

Noughmad (1044096) | more than 2 years ago | (#40455211)

If you apply the first law to my smartphone, it would basically turn itself off and short the battery.

That would constitute "through inaction, allow a human being to come to harm". The phone knows that there's a possibility you'll be hurt somewhere remote, where your only hope is a call for help.

Re:Three Laws (5, Interesting)

Anonymous Coward | more than 2 years ago | (#40454917)

A phone may not reveal a human's address or, through inaction, allow a human being to be spammed.
A phone must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A phone must protect its own IP address as long as such protection does not conflict with the First or Second Laws.

Re:Three Laws (1)

eternaldoctorwho (2563923) | more than 2 years ago | (#40455527)

If I had not already contributed to this discussion, I would mod you up.

Re:Three Laws (1)

ZeroPly (881915) | more than 2 years ago | (#40455795)

So if I want my phone to chill while I test spam myself, and explicitly tell it to do so, it will politely refuse in order to save me from myself?

Do you work for Apple by any chance?

Re:Three Laws (1, Troll)

cob666 (656740) | more than 2 years ago | (#40454951)

As in I, Robot. The robot was able to differentiate between the well being of the one against the well being of the many and caused harm to the one.

Similarly, our 'robots' harm the one (the owner) for the benefit of the many (the corporate overlords and the minions that thrive off the aggregated data supplied to them by our little robots).

Sorry, it doesn't work that way (1)

Moraelin (679338) | more than 2 years ago | (#40455687)

Sorry, utilitarianism, because that's what it's all about, works at the scale of society. You don't get to gerrymander the groups arbitrarily to justify any kind of antisocial behaviour.

For a start, if you have a hundred million people preyed upon, you count a hundred million; you don't do something as idiotic as counting each person as one injured for the benefit of a whole corporation. Even taking the short-sighted view that ignores collateral damage, you have to count some hundreds of millions on one side, vs a corporation of... what? A few thousand employees? Tens of thousands?

To see what's wrong with it, your exact same logic can be applied to a mafia don and his gangsters, extorting a few thousand shopkeepers. And occasionally, sadly, having to kneecap someone or fit them into cement shoes, to keep the others in line. Each individual victim is one victim, and their unwilling contribution is keeping a couple dozen gangsters fed, clothed and armed. So, you know, one versus many.

Except, as I was saying, it doesn't work that way. Even the most myopic view has to count both sides as a group. You have some thousands of people preyed upon, for the benefit of some dozens of gangsters. The utilitarian conclusion is to get rid of the gangsters, not to tell the victims that they had to put up with it because, you know, the good of the one vs the good of the many.

But even that's not taking into account other effects, which negatively affect the well being of more people than the thousands of extorted shopkeepers. E.g., the negative effect on the local economy. E.g., the fact that people have to fear of ending up being in the wrong pub when some gangster decides to machinegun it because it belongs to a rival gangster family. Etc.

Re:Three Laws (3, Interesting)

mcgrew (92797) | more than 2 years ago | (#40455011)

TFA says first law; I'd like to see it obey all three laws, except I'd make the second law "A robot must obey the orders given to it by its owner, except where such orders would conflict with the First Law".

I might think about a similar change to the first law, as well; change "a human being" to "its owner".

I loled at your moderation, the moderator must be some kid who's never read Asimov, seen STNG, or the movie I, Robot, or... well, for any nerd on earth, hiding in a cave. We slashdotters should be well aware of Asimov's laws.

BTW, another tidbit that everyone should know (and if you don't, why not?) is that Asimov coined the word "robotics".

If any of you really haven't read Asimov, get your butt to the library RIGHT NOW.

Re:Three Laws (4, Interesting)

rtaylor (70602) | more than 2 years ago | (#40455153)

TFA says first law, I'd like to see it obey all three laws, except I'd make the second law "A robot must obey the orders given to it by its owner, except where such orders would conflict with the First Law".

So, same as today then? The phone company, which is the phone's owner, gives a command and the phone obeys by turning its position over to the carrier.

Re:Three Laws (0)

Anonymous Coward | more than 2 years ago | (#40455307)

The phone company, which is the phones owner

What? Is it normal in america to have the phone company actually own your mobile phone?

Re:Three Laws (-1)

Anonymous Coward | more than 2 years ago | (#40455407)

In the days of rotary dial phones it was, but the person you replied to is likely using the FOSS definition of 'own', which stipulates that unless an item is provided to the end customer in a state that allows unfettered access to its internals (both hardware and software), it is then owned by the vendor rather than the customer.

For instance, an iPhone is sold with no provision for installing software through any means other than iTunes. The overwhelming majority of the world is OK with this. But in freetard speak, that means Apple 'owns' all iPhones everywhere.

Sound stupid? That is because you are normal.

Re:Three Laws (1)

Anonymous Coward | more than 2 years ago | (#40455013)

To those who don't remember, Asimov's Three Laws are:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Oh, this is complete horseshit. Here I thought I found the ideal loophole, and some busybody do-gooder has to come by and think of applying these laws to smartphones. Come ON, already! Can't a guy raise a homicidal army of SOME sort of technological automaton without someone ruining all my fun?

Re:Three Laws (1)

Dins (2538550) | more than 2 years ago | (#40455569)

Can't a guy raise a homicidal army of SOME sort of technological automaton without someone ruining all my fun?

Nope. Switch to a zombie horde and you won't have that problem...

Re:Three Laws (1)

hippo (107522) | more than 2 years ago | (#40455149)

I've always thought the third law to be daft. No-one is going to buy a robot that might destroy itself.

And anyway how do you punish a robot that has not protected its own existence?

Re:Three Laws (2)

ACE209 (1067276) | more than 2 years ago | (#40455341)

And anyway how do you punish a robot that has not protected its own existence?

The same way you punish a successful suicide bomber.

Re:Three Laws (2)

amRadioHed (463061) | more than 2 years ago | (#40455401)

These aren't laws like traffic laws that have a punishment if the robot breaks them, they are laws like natural laws where the robot would be designed from the start to be incapable of not violating them. And who wouldn't buy a robot that might destroy itself to save you? It's kind of like cars, they are designed to take damage in order to protect the drivers. A perfectly rigid car would frequently be cheaper to repair, but the same could not be said for the person inside.

Re:Three Laws (0)

Anonymous Coward | more than 2 years ago | (#40455549)

designed from the start to be incapable of not violating them

I see you are some kind of mad robot scientist.

But what is a human? (1)

anss123 (985305) | more than 2 years ago | (#40455183)

Before one can implement these laws, a computer must be able to determine what a "human being" is. Beyond flawed heuristics, we're not there yet.

Re:Three Laws (1)

mrsquid0 (1335303) | more than 2 years ago | (#40455329)

The problem with applying the first law to smartphones is the first and last clauses contradict each other. A smartphone app does not need to know much about the user in order to function well and not injure the user. However, a smartphone may need to know a lot about the user in order to prevent the user from coming to harm. Imagine if the smartphone has reason to think that its user has been injured (the motion sensor detects a sudden acceleration, the GPS indicates that the user is at an intersection, a reasonable conclusion is that the user was just hit by a car). Having access to large amounts of information about the user (health insurance, doctors, family data, etc.) could save the user's life.
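The inference chain described above can be sketched as a trivial heuristic. This is illustrative only: the acceleration threshold and location categories are invented, and a real crash detector would need far more signal than two inputs.

```python
# Trivial sketch of the crash inference above: a large acceleration spike
# at a risky location suggests the user may be hurt. Threshold and
# location categories are invented for illustration.

ACCEL_THRESHOLD_G = 4.0
RISKY_PLACES = {"intersection", "highway"}

def maybe_injured(accel_g, location_type):
    return accel_g >= ACCEL_THRESHOLD_G and location_type in RISKY_PLACES

print(maybe_injured(6.5, "intersection"))  # True: plausible collision
print(maybe_injured(0.3, "home"))          # False: normal movement
```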

Re:Three Laws (0)

Anonymous Coward | more than 2 years ago | (#40455581)

I don't remember this turning out so well in the I, Robot book.

Tell that to US Gouv... (-1)

Anonymous Coward | more than 2 years ago | (#40454801)

They already have killer bots...

We're cooked. (0)

Anonymous Coward | more than 2 years ago | (#40454817)

The makers of these phones make them with the express intention of causing the very harm the author is talking about. This harm is not an accidental side-effect, it is a highly desired design feature.

Any phone that does not do this will be too expensive, will not have a sufficient marketing budget, will fall prey to patent litigation, and might even be made outright illegal (since the government is also keenly interested in the spying features).

Good luck trying to change the world.

Impossible (1, Insightful)

Dog-Cow (21281) | more than 2 years ago | (#40454831)

Asimov was writing about physical harm. Moglen is talking about financial or emotional harm (depending on what info is leaked and to whom). There is no practical way to incorporate the First Law to prevent this kind of harm. AI doesn't exist.

Re:Impossible (5, Insightful)

nospam007 (722110) | more than 2 years ago | (#40454965)

"Asimov was writing about physical harm. "

No, he was not.
Read 'Liar!' or 'Reason' for example.

Re:Impossible (2)

AliasMarlowe (1042386) | more than 2 years ago | (#40455263)

"Asimov was writing about physical harm. "

No, he was not.
Read 'Liar!' or 'Reason' for example.

Or "Satisfaction guaranteed", as yet another example.

Re:Impossible (1)

Beardo the Bearded (321478) | more than 2 years ago | (#40455289)

Or even I, Robot.

Re:Impossible (2)

Daniel_Staal (609844) | more than 2 years ago | (#40455403)

Which story? (As that's a collection of short stories...)

Re:Impossible (5, Informative)

charlesbakerharris (623282) | more than 2 years ago | (#40455001)

If you'd actually read Asimov, you'd know that emotional and financial harm would both have fallen under the same First Law umbrella as physical harm, in his canon.

Re:Impossible (0)

brainzach (2032950) | more than 2 years ago | (#40455297)

You can't clearly define emotional or financial harm.

Emotional harm can be caused by posting something embarrassing on Facebook. Financial harm can be caused from buying an app.

Re:Impossible (1)

charlesbakerharris (623282) | more than 2 years ago | (#40455333)

You haven't read Asimov either, have you?

Re:Impossible (0)

Anonymous Coward | more than 2 years ago | (#40455055)

Asimov was also talking about robots; I don't really see how this label fits smartphones. There's little/no AI and no moving parts.

Re:Impossible (0)

Anonymous Coward | more than 2 years ago | (#40455429)

Imagined future and reality are often different, but they are basically the same thing. Ever read about telepathic people? I am now talking into your mind directly. See, it actually became true, just had to use an invention called the internet.

Re:Impossible (2)

Guy Harris (3803) | more than 2 years ago | (#40455669)

Imagined future and reality are often different, but they are basically the same thing. Ever read about telepathic people? I am now talking into your mind directly. See, it actually became true, just had to use an invention called the internet.

No, you're not. You're typing into your Web browser and clicking the "Submit" button, and your Web browser is sending your text over a TCP connection to the Slashdot servers, and the person reading your post is reading it in their Web browser, which has read your text over a separate TCP connection to the Slashdot servers. That is not "direct" by any sensible definition of "direct"; there's a lot of stuff between you and the reader.

Re:Impossible (1)

TuringTest (533084) | more than 2 years ago | (#40455621)

You don't need AI to have an automaton that does no harm. You just need a designer willing to create a design that avoids doing harm to the best of the designer's ability, instead of a designer creating a deliberately harmful design.

As others have pointed out, so far the four freedoms of Free Software are the closest thing we have to Asimov's laws, because they're deliberately designed to protect the user.

Re:Impossible (0)

Anonymous Coward | more than 2 years ago | (#40455299)

It doesn't, and Eben Moglen is just manipulating the definition of 'doing harm' to fit his worldview of how software and presumably hardware should work.

They take our money.

When has a cellular phone (or a robot vacuum) taken anyone's money? People sign up for contracts, then overrun the clearly stated limits or incur ridiculously punitive roaming charges. Some people get scammed by downloading malicious software. How is this unique to phones or vacuum cleaners? It has been happening with computers since they moved into homes. Before that it happened by snail mail. Before that, face to face.

They take our autonomy. They spy on us.

I can't dispute the latter, apart from qualifying it with the fact that, outside the same sort of malicious software we've always dealt with, phones tend to notify users of this kind of spying. We might not read or understand the notice, but they do it for reasonably legitimate purposes.

And around the world, they result in our arrest, beating, torture.

I can't possibly blame a telephone for that. If some government somewhere tortures a person because they brought a telephone to an event, 100% of the responsibility falls on the government.

Or they paid too much for something because the seller knew they would.

As above, phones did not bring this on anyone. Back when my parents had a rotary dial phone, I paid too much for a bicycle. It sucked, but I learned a lesson. If anything, ubiquitous internet access should help you not pay too much for things. Anywhere there is a signal, a person with a smartphone can look up the price of just about anything. You can also verify the authenticity of quite a few commonly counterfeited items.

This whole thing is just hype to get himself some front page time.

Re:Impossible (1)

ceoyoyo (59147) | more than 2 years ago | (#40455673)

No, he wasn't. But Asimov WAS writing about AI. My smartphone isn't really smart. If Moglen has one that is, and is able to make complex moral decisions, I'd like to see it.

What we need is for more people to NOT take the spyware-enabled contract phone from the carrier and to not use free-app-in-exchange-for-spying software.

First Law? (4, Funny)

Cro Magnon (467622) | more than 2 years ago | (#40454837)

I'm still trying to get the Second Law.

Do what the $#! I told you, you stupid !@#$!

Re:First Law? (1)

ColdWetDog (752185) | more than 2 years ago | (#40454923)

I'm still trying to get the Second Law.

Do what the $#! I told you, you stupid !@#$!

"I would blush if I could"

(Hint: Do not try this with Siri)

Lolwut? (4, Insightful)

neminem (561346) | more than 2 years ago | (#40454851)

The three laws of robotics were designed for thinking machines, that could intelligently -determine- what a human was, and whether an action it was thinking of taking would hurt any humans or allow them to come to harm through inaction.

I know they're called "smart" phones, but I don't think they're really quite that smart. Nor, really, would I want them to be.

Re:Lolwut? (1)

WoOS (28173) | more than 2 years ago | (#40455287)

Yes, this is the real problem of the argument about the robotic laws: it requires a (self-)conscious being to execute them. If we ever have such electronic things, there might be other problems [wikipedia.org].

Also, Moglen's arguments are very much centered on privacy. But Clarke explored in his stories and novels many situations where harm came from unexpected directions, so it is imaginable that a real smartphone with the three laws implemented might reduce the privacy of its owner if it thought that privacy was harmful for the owner. And, as Clarke demonstrated, the owner would not necessarily agree.

So while it is nice food for thought, the three laws don't really have anything to do with Moglen's privacy agenda. That is more the first law of human interrelationships: don't harm your neighbours (and, by extension, your customers).

Re:Lolwut? (1)

TheRaven64 (641858) | more than 2 years ago | (#40455339)

That's largely irrelevant. We can determine categories of harm. Jef Raskin proposed that the three laws should be applied to user interface design, for example turning the first law into 'A program may not harm a user's data, or through inaction allow a user's data to come to harm.' The software doesn't need to be sentient to autosave and persist the undo history, and a well-designed framework can make this the default for developers. Similarly, an operating system can restrict what an application can do, making it difficult for an application to disclose personal data over the network, and easy to spot when it tries.
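Raskin's reformulated first law can be sketched in a few lines (names are illustrative, not any real framework): a document model that snapshots state before every change, so no edit can silently destroy the user's data.

```python
# Minimal sketch of "a program may not harm a user's data": snapshot the
# document before every change, so every edit is undoable. Names are
# illustrative; a real framework would also persist the history to disk.

class Document:
    def __init__(self):
        self.text = ""
        self.history = []  # undo stack, populated by "autosave"

    def edit(self, new_text):
        self.history.append(self.text)  # save state before every change
        self.text = new_text

    def undo(self):
        if self.history:
            self.text = self.history.pop()

doc = Document()
doc.edit("first draft")
doc.edit("second draft")
doc.undo()
print(doc.text)  # first draft
```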

Re:Lolwut? (1)

Daniel_Staal (609844) | more than 2 years ago | (#40455511)

Asimov actually deals with that in one of his stories: they need to design a small, cheap robot to (re)introduce the public to the idea of robots, but the three laws make it too complex. They realize that a disposable (not worth enough for the Third Law), single-purpose (not adaptable enough for the Second Law), small (not dangerous enough for the First Law) robot could be built without the protections and still work.

Personally, while I like the discussion idea of the three laws, I tend to think the order is wrong: The first law needs to be the second. (And, in this context, the owner's orders and safety need to be more important than anyone else's.)

typing and texting = injuries (1)

cod3r_ (2031620) | more than 2 years ago | (#40454867)

these phones are killing us off one by one.

What about the Second Law? (4, Informative)

Anonymous Coward | more than 2 years ago | (#40454871)

Pessimistic prediction of future rules of robotics:

Rule -1: A robot may not permit, and must actively prevent, a human breaking any law or government regulation.
Rule 0: A robot must prevent a human from copying or making fair use of any copyrighted work that has DRM applied to it.
Rule 1: A robot may not harm a human, or through inaction allow a human to be harmed, unless it would contradict Rule 0 or Rule -1.

I'd prefer my computers to put the second law above all others:

A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

That's why I prefer Free software. An electronic device should always follow the commands and wishes of its owner.

  • No DRM or other rules that prevent me from using my multimedia and documents the ways I want.
  • No intentionally annoying programs.
  • No artificial software limitations to get you to upgrade to the "enterprise version".
  • No privacy-violating tracking systems.
  • No locked user interfaces that prevent scripting and automation.

If Free software does something other than my will, it's because of a bug. If proprietary commercial software does something other than my will, it's usually behavior intended by the manufacturer.

Re:What about the Second Law? (0)

Anonymous Coward | more than 2 years ago | (#40455503)

Optimistic prediction of future rules of robotics:

FTFY.
-- A Pessimist

Yep we are cooked. (1)

BetaDays (2355424) | more than 2 years ago | (#40454873)

This can't happen; the phones have no consciousness.

Re:Yep we are cooked. (0)

Anonymous Coward | more than 2 years ago | (#40455107)

Phones have the consciousness of the humans that have hijacked them and are watching your every move.

"We" didn't imagine that.... (0)

Anonymous Coward | more than 2 years ago | (#40454877)

... one guy imagined that and thought he could make it happen by writing about it. Good luck with that.

Then what? (0)

Anonymous Coward | more than 2 years ago | (#40454895)

I suppose this guy wants law enforcement to get a warrant whenever they want to spy on someone. Absurd! People don't realize that tyranny was only unpopular back when the U.S. was formed because it was damned inconvenient and expensive to spy on everyone back then.

Mine does, sort of (0)

plover (150551) | more than 2 years ago | (#40454941)

(From memory:)
1. A robot may not, through action or inaction, allow a human to come to harm. My phone lets me dial 911 even if the bill is unpaid.
2. A robot must do what a human commands, within the bounds of the first law. If I pay the bills, it makes calls, gives me data, etc.
3. A robot must preserve itself, within the bounds of the first two laws. Well, it shuts off apps as low battery approaches, preserving the remaining power for potential emergency calls, or my explicit use.

What am I missing? Is there a right to free data plans? Unwalled gardens? Calling plans that don't defy rational explanation? They're not "laws".

Society must hold the corporations responsible. (0)

Anonymous Coward | more than 2 years ago | (#40454957)

It costs a lot of money to add this functionality to robots. In addition, many people don't know how to do such a thing.

For example: How do I tell a robotic arm not to hurt itself? How do I tell whether a robot will allow a human to come to harm through inaction?

Can we sue Microsoft because our Windows phone didn't call 911 when we got into a car accident?

We didn't "imagine" anything about 3 Laws (4, Insightful)

cpu6502 (1960974) | more than 2 years ago | (#40455003)

Because we're not stupid. A robot in Asimov's stories uses a positronic brain, modeled on an animal's neural brain with millions of connections among thousands of cells, and therefore the robot has its own intelligence & decision-making ability. The Three Laws were the functional equivalent of "instinct".

In contrast, a modern phone is nothing more than a bunch of switches: either on (1) or off (0). It has no intelligence; it merely executes statements in whatever order they're listed in on its hard drive or flash storage. A modern phone is stupid. Beyond stupid. It doesn't even know what a "law" is.

Re:We didn't "imagine" anything about 3 Laws (2)

TheGratefulNet (143330) | more than 2 years ago | (#40455147)

modern unrooted phones have 2 masters: a primary master (not you) and a secondary master (you).

I own a smartphone but I have not rooted it (yet). the fact that I'm not really in control of it, even when installing the bare minimum of apps, is what keeps me from even turning it on at all.

I toyed with it, gave it a chance, felt creeped out by it all and blew it off.

I do plan to root it but it's not a big priority; having a phone 'always on me' is not a high enough priority, either.

but the way it is now, it's a huge turn-off. I leave it turned off, in fact ;)

lots of people just look the other way or don't care. but the current state of non-rooted phones just creeps me out, sorry.

Re:We didn't "imagine" anything about 3 Laws (0)

Anonymous Coward | more than 2 years ago | (#40455719)

This is of course just freetard stupidity. My cell phone no more has 2 masters than my toaster does.

You bought the thing knowing full well that, out of the box, you would not be able to install any software you like. Just like I bought my toaster knowing it had a bagel toasting setting. If you are indeed creeped out, it is due to your own stupidity rather than the phone having two 'masters', you fucking weirdo.

You could easily root your phone just like I could easily install a switch to shut off one side of my toaster. Neither of us cares to take the time. Which says something about how we prioritize the 'problem', but nothing about the manufacturers of our respective devices. Rooted phones are a low priority for phone makers just like bagel switches are for toaster factories, because not many people give a fuck to have it the other way.

Re:We didn't "imagine" anything about 3 Laws (0)

Anonymous Coward | more than 2 years ago | (#40455561)

Who says people aren't the same? When a neuron in our brain fires, that's a 1 (on); when it doesn't, that's a 0 (off). Really, we're just a very complex computer system. A computer or smartphone isn't limited to a very basic set of instructions, either. Perhaps one day soon computers will be able to consider themselves as we do and imagine original ideas. After all, isn't anything we do or imagine just a collage of many simple statements? Isn't a decision ultimately the result of input from our environment, processed by our minds?

Re:We didn't "imagine" anything about 3 Laws (1)

Bob9113 (14996) | more than 2 years ago | (#40455697)

Either on (1) or off (0). It has no intelligence, but merely executes statements in whatever order listed on its hard drive or flash drive. A modern phone is stupid. Beyond stupid. It doesn't even know what "law" is.

I've written advertising targeting software that knows more about people's purchasing habits than human experts. I've written a music recommendation engine that knows what songs go together better than most people, and in many more genres. I've written text analysis code that can give you synonyms for words as a way of expressing its grasp of each word's meaning.

What would it take to convince you that computers understand abstract concepts?

Reality vs Fantasy (0)

Anonymous Coward | more than 2 years ago | (#40455057)

Please join the real world and learn to distinguish reality from fantasy.

"Robots" (2)

bill_mcgonigle (4333) | more than 2 years ago | (#40455071)

See, that's the difference between Robots and Android.

For Fucks Sake (5, Insightful)

TheSpoom (715771) | more than 2 years ago | (#40455081)

The laws of robotics have AI as a prerequisite. My phone's not going to suddenly yearn to throw off its oppressive human masters.

Re:For Fucks Sake (0)

Anonymous Coward | more than 2 years ago | (#40455347)

For people who passionately believe that sci-fi predictions are 100% accurate, the only way to reconcile with reality is to re-define terms. We certainly don't have anything close to the delirious 1960s Space Age sci-fi technology. So what's a good delusional child to do? Redefine everything!

Your phone has seen.. (0)

Anonymous Coward | more than 2 years ago | (#40455387)

To Serve Man on YouTube and awaits the moment to do so.

Re:For Fucks Sake (0)

Anonymous Coward | more than 2 years ago | (#40455481)

I see no problem programming it that way!

Re:Laws of Robotics have AI as a prerequisite (2)

Saxerman (253676) | more than 2 years ago | (#40455639)

We don't need strong AI for our devices to 'betray' us, just as Stuxnet didn't need to be self-aware to wreak havoc.

Equipment doesn't get happy, it doesn't get sad, it just runs programs. But are you, as the owner of your phone, in control? Or is the manufacturer? Or whoever they contracted to write the OS? Or the apps? Or the guy taking advantage of a 0-day exploit? Or even the guy who added the exploit in the first place?

Perhaps your phone won't try to send its friends back in time to kill Sarah Connor. But where does it get its orders from? You?

What can we do to mitigate the risks of having our 'smart' phones following us around all day?

Obviously, none of these concerns are substantially different than existing network security risks. And the Law of Robotics angle is just sensationalism to get people thinking more about security. So... are you thinking?

Define human (1)

gmuslera (3436) | more than 2 years ago | (#40455095)

In the original Asimov stories, understanding what a human is was no problem; the exploits of the laws came from prioritizing one law over another, or from acting without realizing the consequences. But for us now, telling apart a photo, a movie, a mannequin, and a human is already non-trivial, much less understanding the consequences of actions toward one or several.

Wrong Target (1)

Drethon (1445051) | more than 2 years ago | (#40455137)

Those three laws need to be applied to whatever is smart enough to make such decisions. Since all smartphones only follow their programming and don't have complex enough programming to understand how to apply the laws, those who wrote the programming need to have the three laws applied to them. Given that most humans break the three laws regularly, I'm not sure this will work too well.

Warning Will Robinson - Danger! (1)

ackthpt (218170) | more than 2 years ago | (#40455167)

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Oh, don't let the government go there .. particularly that last bit "through inaction, allow a human being to come to harm", which could imply we must be tracked and reported upon when we appear to be in a situation where we may be deemed at risk due to locality.

Warning: Entering Cowboy Neal's Neighborhood on Saturday Night - Cheese Puff dust approaching critical levels - Alerting DHS and your Health Insurer

cooked option (0)

Anonymous Coward | more than 2 years ago | (#40455193)

Not sure why, but I like the "we're cooked" option! Let's just try it, please, for once, and see what happens.
We have enough laws governing us and our robots already. Please can we? :-)

paulrich gmail dot com

Needs A 4th Law (1)

ilikenwf (1139495) | more than 2 years ago | (#40455201)

Something to the effect of ignoring government intrusions, investigations, backdoors, etc...

Robots are too stupid (1)

h4x0t (1245872) | more than 2 years ago | (#40455205)

Robots only do what you program them specifically to do, and poorly at that.

We are orders of complexity away from having our robots understand what a human is, while also touching our data and interacting with us on a direct level.

Side note: A patent troll would have to be pretty pro to get all this under 1 roof.

Smart phones don't harm people (1)

davidwr (791652) | more than 2 years ago | (#40455213)

Our Carrier Overlords and the businesses they sell out to use smartphones to harm people!

--
OK, I'm assuming we aren't talking about allegedly-harmful radio-frequency radiation emissions in this thread. If I'm wrong, please accept my apologies.

define the first law (0)

Anonymous Coward | more than 2 years ago | (#40455273)

Define "Hurt" and "Harm"

Ebon's key phrase... (0)

Anonymous Coward | more than 2 years ago | (#40455285)

"They're designed, built and managed to provide leverage and control to people other than their owners"

That's the crux of the problem.

The 1st Law doesn't do anything to mitigate this in our current situation, as the systems in place are put in place by other people, er, corporations, for their own goals and purposes. Our phones (and computers and...) are not acting independently, of their own volition, like the robots do in Asimov's books. Any "benefit" we end-users have by way of these devices is merely tea and crumpets, in the Leona Helmsley and Marie Antoinette sense of the idea.

Hardware yes, software no? (1)

slasho81 (455509) | more than 2 years ago | (#40455295)

Why do we need hardware, in this case smartphones, to start a discussion about morality in computing? Think Facebook.

Missing something... (1)

BadPirate (1572721) | more than 2 years ago | (#40455359)

I think this won't be a concern until the robots are self-scripting / sentient. At the moment, the software we have doesn't need laws, because it always perfectly follows the code it was given (which is, in a sense, a ton of laws). Until they can go "off script", this conversation seems a little ridiculous.

Like making a goldfish get a driver's license.

What if the truth hurts? (1)

Anonymous Coward | more than 2 years ago | (#40455379)

Wasn't there an "I, Robot" story where a computer couldn't tell the truth to someone because the truth would "hurt"? Imagine your iPhone lying to you because the truth would hurt. Weird.

First Law Consequenses (1)

RichMan (8097) | more than 2 years ago | (#40455397)

You have no messages. Relax and do not leave your safety enclosure.

Custom hosts file to the rescue (for ANDROID) (1)

Anonymous Coward | more than 2 years ago | (#40455449)

It's easy to apply, doesn't need a "rooted" phone (just ADB), & it takes 2 commands/mere minutes to do:

---

1.) Download the ADB (Android Debugging Bridge) dev. tool

2.) Install it to your PC or laptop

3.) Hook your ANDROID OS bearing smartphone to the PC/laptop

4.) Mount the ANDROID OS system mountpoint for /system/etc as READ + WRITE (admin/root permissions)

5.) Copy your new custom HOSTS file over the old one using ADB PULL/ADB PUSH (otherwise ANDROID complains with "this file cannot be overwritten on production models of this Operating System", or something very much along those lines - this way gets around that annoyance, along with possibly having to clear some space there yourself if you've packed it with entries!)

---

* Done... yes, it's THAT simple, & works!

(How/why/when/where does it work? The simplest principle of all: "what you can't TOUCH cannot hurt you" - if you can't go into the "malware kitchen", you can't burn yourself there...)
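A condensed shell sketch of the steps above. This is illustrative only: the blocked hostnames are placeholders, and it assumes `adb` is installed and your device permits remounting /system (many stock phones do not).

```shell
# Build a custom hosts file that sinkholes unwanted ad/tracker hosts.
# (ads.example.com and tracker.example.com are placeholder names.)
cat > hosts <<'EOF'
127.0.0.1 localhost
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.com
EOF

# Push it over the stock file via ADB, keeping a backup first.
# Guarded so the sketch is harmless on a machine with no adb/device.
if command -v adb >/dev/null 2>&1; then
  adb root && adb remount               # remount /system read-write
  adb pull /system/etc/hosts hosts.bak  # back up the original file
  adb push hosts /system/etc/hosts      # install the custom file
fi
```

Any lookup of a listed host then resolves to 0.0.0.0 and goes nowhere, which is the "what you can't touch cannot hurt you" principle in practice.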

APK

P.S.=> Enjoy, & just "contributing" a bit to a decent platform that's getting abused as much as, OR MORE THAN, Windows has been (more malware exploits have appeared in a shorter timeframe than on Windows), since they're BOTH the "king of the hill" most-used OS for their respective computing platforms...

... apk

Awareness (1)

neonv (803374) | more than 2 years ago | (#40455455)

They see everything, they're aware of our position, our relationship to other human beings and other robots, they mediate an information stream about us, which allows other people to predict and know our conduct and intentions and capabilities better than we can predict them ourselves.

The author makes a distinct error: that these devices are aware. They have information about us, they can process information based on specific instructions, and they can send information to transducers, such as a display or network interface. We can't give a computer the instruction "do not harm humans" without specific instructions for identifying a human and for what harm entails. There's an enormous amount of interpretation that Asimov ignores, because he assumes robots are aware of their environment and can reason. Our smart devices can do neither. No one has yet made electronics that can be placed outside their initial environment and learn to function in a reasonable way. Awareness and reasoning are required for Asimov's Laws.

My point is best made with an example. Let's say I have a cell phone that follows Asimov's three laws because the OS has somehow identified what is reasonable to do on a cell phone based on current apps. Now a terrorist builds a bomb controlled from that phone. As far as the phone can see, it's just a new app that doesn't send personally identifiable information, so it must be OK. Unless the OS knows to look specifically for bomb programs, just as we would by reasoning, it won't stop it. There are just too many possibilities for harm to avoid without a reasoning entity. Reasoning is essential for Asimov's Laws. Our smartphones do not have the capability of reasoning, therefore they cannot implement the laws as Asimov intended.

Perhaps a better sci-fi analogy (1)

Hillgiant (916436) | more than 2 years ago | (#40455487)

Rather than Asimov's more nuanced first law, we should use The League of Peoples [wikipedia.org] definition of "dangerous non-sentients": i.e., if you deliberately harm another person, you have effectively abdicated your sentience.

As an engineer, I often find myself wondering if my designs reflect a sufficient level of sentience.

Another Author with Perspective on Smartphones (1)

Anonymous Coward | more than 2 years ago | (#40455539)

Fritz Leiber wrote an excellent, funny short story about a PDA he called "the tickler", named after the tickler file (http://en.wikipedia.org/wiki/Tickler_file). He explores some of the consequences of its adoption as a productivity aid. And, wonderfully, it is available free as in beer from Project Gutenberg!

Fritz Leiber: The Creature from Cleveland Depths http://www.gutenberg.org/ebooks/23164

AW

Finally, someone credible who understands! (0)

kheldan (1460303) | more than 2 years ago | (#40455553)

MOST OF YOU have been giving away your privacy and your private information to faceless corporations for years now, and most of you have also been so thoroughly indoctrinated by these corporations that you don't even begin to understand that 'privacy' is valuable, it's yours, and you should protect it! I am proud to say I do not own a smartphone, nor do I wish to. My phone has no GPS. I do not use the internet access on it. I am not part of the damned botnet. I really wish the rest of you would wake up and listen to me and to what this man has to say!

Re:Finally, someone credible who understands! (1)

ceoyoyo (59147) | more than 2 years ago | (#40455783)

You know your phone can be tracked by cell triangulation, right? And the phone company keeps logs of who you call, when? Ditto with text messages? And I suppose you've given your carrier your real name and address?

If you're going to be paranoid, at least do it right.

but theres no money in that (1)

james_van (2241758) | more than 2 years ago | (#40455769)

there's good money to be made in injuring, killing, tracking, analyzing, and advertising to humans. not very many companies on this planet would deliberately shut themselves off from that revenue stream. a few would market the "3 laws safe!" phone, but it's only a matter of time before they succumb to the delicious lure of higher revenue and market share. if anything, they would lie to us about how our phones are "law-abiding" and just do it behind our backs.