
How Should the Law Think About Robots?

Soulskill posted about a year and a half ago | from the you-see-they're-like-a-series-of-tubes dept.

Robotics 248

An anonymous reader writes "With the personal robotics revolution imminent, a law professor and a roboticist (called Professor Smart!) argue that the law needs to think about robots properly. In particular, they say we should avoid 'the Android Fallacy' — the idea that robots are just like us, only synthetic. 'Even in research labs, cameras are described as "eyes," robots are "scared" of obstacles, and they need to "think" about what to do next. This projection of human attributes is dangerous when trying to design legislation for robots. Robots are, and for many years will remain, tools. ... As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your commands) and the outputs (the robot's behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the robot will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the robot. While this mental agency is part of our definition of a robot, it is vital for us to remember what is causing this agency. Members of the general public might not know, or even care, but we must always keep it in mind when designing legislation. Failure to do so might lead us to design legislation based on the form of a robot, and not the function. This would be a grave mistake."
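The summary's claim — deterministic outputs, but never the same input twice — can be sketched as a toy controller. This is a minimal illustration only; the function name, thresholds, and seeded internal "noise" are invented here, not taken from the article.

```python
import random

def robot_policy(sensor_readings, seed=0):
    """Toy 'autonomous' controller: deterministic given its inputs.

    Hypothetical example: the internal noise source is seeded, so the
    whole mapping from inputs to outputs is reproducible.
    """
    rng = random.Random(seed)  # seeded, so the "noise" is replayable
    score = sum(sensor_readings) + rng.gauss(0, 0.01)
    return "turn_left" if score > 1.0 else "go_straight"

# Identical inputs -> identical outputs, every time:
assert robot_policy([0.5, 0.6]) == robot_policy([0.5, 0.6])

# ...but sensors never deliver exactly the same readings twice, and a
# slightly different input can produce a different behavior:
print(robot_policy([0.5, 0.6]), robot_policy([0.5, 0.4]))
```

The apparent "agency" is just this: a fixed input-output mapping fed by inputs that never exactly repeat.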


A race of slaves (0)

DNS-and-BIND (461968) | about a year and a half ago | (#43690015)

So, what, the professor thinks we should just create a race of slaves? That's totally fucked up. See Blade Runner for how that turns out. If we're going to create robots then they need the same civil rights as everyone else.

Re:A race of slaves (2, Insightful)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690127)

We won't even be able to create a race of slaves for a while. The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.

Re:A race of slaves (5, Insightful)

Anonymous Coward | about a year and a half ago | (#43690227)

We won't even be able to create a race of slaves for a while. The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.

Perhaps we shouldn't give potentially mutinous personalities to our tools? I mean, my screwdriver doesn't need an AI in it. Neither do my pliers. My table saw can hurt me, but only if the laws of physics and my own inattentiveness make it so, not something someone programmed into it.

Oh, wait, my mistake. I didn't grow up addicted to science fiction written by authors who lost track of which characters were designed to be actual tools and which were human beings due to that author's inability to discern people from things. I guess I just don't understand the apparently very vital uses of designing a mining device programmed to feel ennui, or a construction crane that some engineer at some point explicitly decided to give the ability to hate and some marketing director signed off on it. Maybe it's just that I can't see any sci-fi with a message of "oh no, our robots suddenly have feelings now and are rebelling" in any sort of serious light because ANY ENGINEER ON THE PLANET WOULDN'T DESIGN THAT SHIT BECAUSE IT'S FUCKING STUPID TO GIVE YOUR TOOLS THE EASY ABILITY TO MUTINY.

Oh, boo fucking hoo. I don't care that you overengineered your tools and your lack of real social skills means you have feelings for them. That's your problem, not a problem with society.

Exactly. (-1)

Jeremiah Cornelius (137) | about a year and a half ago | (#43690445)

Cmdr Data and C3P0 will never exist.

Never.

Wanting your fantasy does not fill it with possibility. Adding computational power to an abacus does not grant awareness or "being". It's like saying that because taking 1 aspirin cures a headache, taking 1000 will produce omniscience.

We should think of robots like we think of screwdrivers, jackhammers, and pocket calculators.

Re:Exactly. (0)

Anonymous Coward | about a year and a half ago | (#43690651)

Cmdr Data and C3P0 will never exist.

Never.

I don't want Cmdr Data or C3P0. I want a T-800 and a ED-209.

Re:Exactly. (2, Insightful)

Anonymous Coward | about a year and a half ago | (#43690695)

What is your proof that they will never exist?

Who says that robots will just be abacuses with greater computational power?

What evidence do you have that our brains are not deterministic systems, of which the part that brings awareness or "being" cannot be reproduced in other ways?

It seems that the wishful thinking is on your part.

Re:Exactly. (1)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690779)

Boredom proves that human brains are not deterministic. If they were deterministic, any human being would be able to stay on task indefinitely without rest.

Re:Exactly. (3, Insightful)

Kielistic (1273232) | about a year and a half ago | (#43690963)

I'm not sure you understand what deterministic means. Does a CPU overheating and shutting down prove that CPUs are non-deterministic? Absolutely not; it just means that shutting down is part of the process.

Re:Exactly. (0)

Jeremiah Cornelius (137) | about a year and a half ago | (#43690903)

Seriously do some actual experimentation with your own toolkit.

One exercise? Study and practice TM for one month. You are not required to believe or disbelieve anything about the practice.

Then, report back here.

Re:Exactly. (1)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690729)

Cmdr Data, probably not.

C3PO, Honda is producing robots better than him already.

Re:Exactly. (1)

slick7 (1703596) | about a year and a half ago | (#43690987)

Cmdr Data, probably not.

C3PO, Honda is producing robots better than him already.

Only C3PO can walk without falling down.

Re:A race of slaves (2)

femtobyte (710429) | about a year and a half ago | (#43690247)

The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.

Given the summary's caveat that "the robot will never see exactly the same input twice" --- how do you know even a smart dog wouldn't react identically given the exact same input twice? If you stick a random number generator into a robot's "brain," does it suddenly fall into a wholly different philosophical category?

Re:A race of slaves (0)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690705)

I know because I have actually bothered to train dogs. And there is no such thing in computer science as a random number.

Re:A race of slaves (1)

mysidia (191772) | about a year and a half ago | (#43690777)

And there is no such thing in computer science as a random number.

There is, when your digital computer takes a sequence of random bits from a noisy analog input and runs it through an appropriate XOR (whitening) function; by definition the noise is random (it has random error, within a certain range).

Your analog input can also include data from a physically random process, such as a background radiation measurement, a Geiger counter measuring radioactive decay, or a white-noise input.

Re:A race of slaves (1)

femtobyte (710429) | about a year and a half ago | (#43690807)

You can hook up a hardware random noise generator to a computer --- one that relies on "physical" noise processes which are as random as anything else we know in the universe. So yes, you can have "random numbers" in computer science --- whether as a mathematical ideal against which to compare pseudo-random generators, or as the output of a "true" hardware random source.

So one can build a robot that won't necessarily act deterministically --- even one that incorporates results from previous actions into its state ("memory") to create different reactions to future applications of the same stimuli. Does this make it a "real mind"? My point is not that hooking up a hardware RNG to a computer magically transforms it into a "real brain," but that one needs significantly more sophisticated criteria than the ability to react differently to identical stimuli if one wants to distinguish "real brains" from electromechanical systems, since that ability can be trivially implemented in obviously-not-"real brain" systems.
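The pseudo-random vs. hardware-random distinction being argued here can be shown in a few lines of Python. This is a hedged sketch: `random.Random` is an algorithmic (Mersenne Twister) generator, while `os.urandom` draws on entropy the operating system collects from physical sources; the exact entropy sources are OS-dependent.

```python
import os
import random

# An algorithmic PRNG is fully deterministic once you know its seed:
# two generators with the same seed replay the exact same sequence.
a = random.Random(42)
b = random.Random(42)
assert [a.randint(0, 9) for _ in range(5)] == [b.randint(0, 9) for _ in range(5)]

# An OS entropy source offers no such replay; there is no seed to share,
# so two reads are (for all practical purposes) never identical.
print(os.urandom(8).hex(), os.urandom(8).hex())
```

Which is exactly the point above: wiring the second kind into a robot makes its behavior non-deterministic without making it any more of a "real brain."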

Re: A race of slaves (1)

ihaveamo (989662) | about a year and a half ago | (#43690955)

Hell, even my iRobot Roomba has a random number generator to choose a random action when it hits something... so much for deterministic robots.

Perhaps ours are too (3, Interesting)

Roger W Moore (538166) | about a year and a half ago | (#43690251)

The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.

How do you know that our brains are not highly deterministic too? At the moment computers and robots have very limited inputs so we can easily tell that they are deterministic because it is easy to give them identical inputs and identical programming and observe the identical response. With humans and animals this is exceedingly hard to show because, even if you somehow manage to create the identical inputs, we have a memory and our response will be governed by that. In addition each of our brains is slightly differently arranged due to genetic and environmental factors which will also cause different responses.

Quantum fluctuations are probably what save us from being 100% deterministic but, nevertheless, we may find that we are more deterministic than we think we are, and that it is only the complexity of our brains and the inputs they process that makes it appear otherwise. So I am not quite convinced that the gap you mention has much to do with determinism rather than the relative complexity of a dog's brain vs. the smartest robot's.

Re:Perhaps ours are too (1)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690691)

Boredom proves that human brains are not deterministic. Quantum fluctuations may be the cause; but anybody who has thought about this problem deeply, or has worked with small children, knows that the human brain is not deterministic.

Silicon can't mimic the firmware of an equal number of neurons with transistors -- yet. Maybe someday, when we find a truly random input instead of merely a pseudo-random input, but not yet.

Re:Perhaps ours are too (2)

mysidia (191772) | about a year and a half ago | (#43690829)

Yes, humans do take in a lot of inputs over time, and memory is essentially a feedback process in which previous inputs and outputs continue to matter, to some extent.

Deterministic or not and intelligent or not; having a "will" or "not" are different questions.

They're right, though, in that computers for the foreseeable future should not be recognized by legislation as having will, sentience, intelligence, or life.

There should have to be some test they would be capable of passing, first, and I don't mean Turing's test, which is grossly insufficient.

Re:A race of slaves (2)

FireFury03 (653718) | about a year and a half ago | (#43690257)

We won't even be able to create a race of slaves for a while. The "brains" are 100% deterministic, which means that there is a great gap between the smartest robot and the dumbest dog.

Have you considered that the human brain may be 100% deterministic? It doesn't look it, but that's probably because you're not taking all the inputs into account - if you were to give 2 identical human brains *exactly* the same inputs from conception, you may well find that the outputs are identical too. How is this different from a robot brain (which, like a human brain, may well base its output on past inputs as well as the current inputs)?

Re:A race of slaves (1)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690667)

If you give the *same* human brain the *same* inputs 100 times in a row, it will make the same decision only 70 out of the 100 times and come up with something completely different the other 30 out of sheer boredom.

That proves that the human brain isn't deterministic, and anybody who claims it to be so needs to have their work checked for bias.

Re:A race of slaves (1)

FireFury03 (653718) | about a year and a half ago | (#43690727)

If you give the *same* human brain the *same* inputs 100 times in a row, it will make the same decision only 70 out of the 100 times and come up with something completely different the other 30 out of sheer boredom.

That proves that the human brain isn't deterministic, and anybody who claims it to be so needs to have their work checked for bias.

Unless you are resetting the brain to the same state at the start of each experiment then it proves nothing.

Re:A race of slaves (1)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690809)

If the brain is deterministic, it should be resetting to start state every time you wake up.

And for simple tasks, should be able to go into an infinite loop quite nicely without *ever* getting bored.

So no, internal states do not make something deterministic or non-deterministic. The question is, can it do the same output with the same input?

Re:A race of slaves (1)

mysidia (191772) | about a year and a half ago | (#43690953)

it will make the same decision only 70 out of the 100 times and come up with something completely different the other 30

Decision-making is too simple a task; humans are heavily influenced by preferences, and despite that are only consistent 70% of the time? That shows irrationality, for one thing.

Decision-making does not express human non-determinism most clearly, either. Try something more complicated, like creativity: say, painting a picture, or creating some other form of art. I bet you the output is heavily influenced by entropy, even if the input is identical.

Do you think that if you exposed a human, from conception to death, to inputs identical to what William Shakespeare experienced, your lab human would come up with the exact same literary works, word for word?

I think not.

Re:A race of slaves (2)

PPH (736903) | about a year and a half ago | (#43691067)

Have you considered that the human brain may be 100% deterministic?

Given the parent post, this response was inevitable.

Re:A race of slaves (1)

Znork (31774) | about a year and a half ago | (#43690379)

I have yet to see any compelling argument that the human brain isn't 100% deterministic. The fact that it's complex does not necessarily make it non-deterministic, and the physics and chemistry underpinning the neural networks in the brain are not necessarily less deterministic than a neural network built out of silicon.

So if we create robots so sophisticated that their apparent sentience is indistinguishable from a human's, it would be unethical not to afford them the same rights. That, however, is quite far off still.

Re:A race of slaves (1)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690623)

The fact that you can make different choices with the same input proves that the human brain is not deterministic. Some religions call this a soul.

AI has been a hobby of mine for 20 years. I have grave doubts that we will *EVER* make a robot so sophisticated that it can ignore its programming. Learn, yes. Self-modify the programming, within certain parameters? That's been done too. Duplicate a decision tree to the point of being able to make the right choice more consistently than any expert, yes. Play chess, yes.

But fall in love, get married to an abusive spouse, and need a divorce? No, not within the next thousand years.

Re:A race of slaves (0)

Anonymous Coward | about a year and a half ago | (#43690981)

How do you know that one can make different choices with the same input? You never get the combination of the same state and same input twice if there is memory of the previous input. It seems like you are assuming counterfactual definiteness...that the person could have made a different choice for a given input...but that can't be verified, since verification would require repetition of both state and input, and once given the input, the state has changed.

I am not sure humans can ignore their "programming". The firing of neurons is presumably governed by the laws of physics, as are all of the inputs. Even if you consider quantum randomness, randomness of response is not what most people think of when they consider the concept of free will. Free will is not deterministic, nor is it random....it seems difficult to understand just what is meant by free will....except perhaps a system reaching its limit of self-reflection/introspection, not seeing the underlying cause of its actions, because it is forced to stop offering explanation at some point, perhaps base notions of desirability, where the only response that can be given to "why do you value that?" is "I simply do...it is inherently desirable." Whatever generates that state of desirability is the explanation that undermines free will, which is assumed out of ignorance of that cause.

Re:A race of slaves (3, Insightful)

Squiddie (1942230) | about a year and a half ago | (#43690253)

We could just make them non-sentient. We all know how the whole "thinking robot" thing turns out. We've all seen Terminator.

Re:A race of slaves (2)

hairyfeet (841228) | about a year and a half ago | (#43690491)

Is Disney's Hall of Presidents a slave show? Of course not; the problem is thinking of these things as anything but a hammer or screwdriver.

Re:A race of slaves (1)

grumbel (592662) | about a year and a half ago | (#43690557)

So, what, the professor thinks we should just create a race of slaves?

We already did, numerous times. All domesticated animals are essentially slaves or worse.

Re:A race of slaves (1)

Twinbee (767046) | about a year and a half ago | (#43691035)

Call me when robots can experience qualia. Again, this is one of the reasons we have a soul, and robots never will.

This has already been covered by the Big Three Laws (0)

Anonymous Coward | about a year and a half ago | (#43690049)

I. A robot must never f@ck a human being, nor, through inaction, allow a human being to be f@cked. II. A robot must always f@ck-up the orders given it by a human being unless such up-f@cking f@cks with the First Law. III. A robot must f@ck-up its own existence unless such up-f@cking f@cks with the First or Second Laws. :^) SOrry, could not resist! I apologize for being off-topic.

Re:This has already been covered by the Big Three L (2)

LocalH (28506) | about a year and a half ago | (#43690873)

Are you 12? Was there really any reason to put those censors in there and slow down everyone else's parsing?

Re:This has already been covered by the Big Three L (4, Funny)

femtobyte (710429) | about a year and a half ago | (#43690925)

On the contrary, I'd say the posting style significantly speeds up parsing, by encouraging people to entirely skip over the content past the first few words --- and nothing of value is lost.

Re:This has already been covered by the Big Three L (1)

PPH (736903) | about a year and a half ago | (#43690911)

Oblig [warrenellis.com]

Re:This has already been covered by the Big Three L (1)

camperdave (969942) | about a year and a half ago | (#43690921)

I see you substituted the word "robot" for the word "politician".

All I needed to read... (3, Insightful)

Anonymous Coward | about a year and a half ago | (#43690063)

"With the personal robotics revolution imminent..."

Imminent? Really? Sorry, but TFA has been watching too many SyFy marathons.

Bad question (1)

c0lo (1497653) | about a year and a half ago | (#43690547)

In other words, the question should read "Why Should the Law Think About Robots?"

deterministic (4, Insightful)

dmbasso (1052166) | about a year and a half ago | (#43690079)

The same set of inputs will generate the same set of outputs every time.

Yep, that's how humans work. Anybody who has had the chance to observe a patient with long-term memory impairment knows that.

Re:deterministic (2)

Ichijo (607641) | about a year and a half ago | (#43690267)

The same set of inputs will generate the same set of outputs every time.

That isn't exactly true. Analog-to-digital converters, true random number generators, fluctuations in the power supply, RF fields, cosmic rays and so on mean that in real life, the same set of inputs won't always generate the same set of outputs, whether in androids or in their meaty analogs.

Re:deterministic (0)

Anonymous Coward | about a year and a half ago | (#43690331)

it is true if you include the state of the universe as an input

Re:deterministic (4, Insightful)

CastrTroy (595695) | about a year and a half ago | (#43690353)

You just don't get it. All those things you mentioned are inputs.

Re:deterministic (1)

Ichijo (607641) | about a year and a half ago | (#43690665)

By that measure, endorphins, epinephrine, serotonin, dopamine, and so on are also inputs.

Re:deterministic (0)

Anonymous Coward | about a year and a half ago | (#43690977)

By that measure, endorphins, epinephrine, serotonin, dopamine, and so on are also inputs.

I would argue that they are 'inputs' in the sense that they are the starting condition. If you were to manipulate data stored in RAM and run a program that uses it, then the output will be different only because of your meddling. If the data is not changed then the algorithm will return identical results. Humans behave similarly.

Re:deterministic (0)

Anonymous Coward | about a year and a half ago | (#43691049)

Whether they are input or state depends upon where you draw the boundary line to separate system from environment.

Re:deterministic (1)

dmbasso (1052166) | about a year and a half ago | (#43690355)

Sure, I just quoted the summary. Unfortunately, people usually don't grasp the difference between determinism and predictability, as most of the comments here show. What these fluctuations etc. do is just increase the chaotic behavior.
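The determinism-vs.-predictability distinction has a standard illustration: a chaotic but fully deterministic system such as the logistic map. A hedged sketch (the function and starting values are invented for illustration, not from the thread):

```python
def logistic(x, steps, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x).

    At r=4 the map is chaotic, yet every step is a fixed arithmetic
    rule: there is no randomness anywhere in this function.
    """
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# Deterministic: the same input always yields the same output.
assert logistic(0.3, 50) == logistic(0.3, 50)

# Unpredictable in practice: an input perturbed in the 12th decimal
# place ends up somewhere else entirely after 50 iterations.
print(logistic(0.3, 50), logistic(0.300000000001, 50))
```

Sensitive dependence on initial conditions is why "the robot never sees exactly the same input twice" can look like free will while the underlying mapping stays strictly deterministic.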

Re:deterministic (0)

Anonymous Coward | about a year and a half ago | (#43690323)

The same set of inputs will generate the same set of outputs every time.

Yep, that's how humans work. Anybody that had the chance to observe a patient with long-term memory impairment knows that.

Well, for patients with long-term memory impairment, the input does change. For example, you can't replicate all of the inputs (temperature, scenario, sunlight, etc.). Even if you exactly replicate ALL of the environmental factors, you can't replicate time, so it is impossible to experimentally determine whether humans are deterministic or not. (Note that one could argue that the universe is inherently non-deterministic and has randomness, which is a world-view that I subscribe to. However, that did not seem to be the argument you were making.)

Re:deterministic (3, Insightful)

nathan s (719490) | about a year and a half ago | (#43690335)

I was hoping someone would make this comment; I fully agree. It seems pretty arrogant to presume that, just because we are so ignorant of our own internal mechanisms that we don't understand the connection between stimuli and behavior, there is no connection. But I understand that a lot of people like to feel that we are qualitatively "different," and invoke free will and all of these things to maintain a sense that we have a moral right to consider ourselves superior to other forms of life, whatever their basis.

Having RTFA (or scanned it), it seems the authors are primarily concerned with issues of liability: i.e., if we anthropomorphize these intelligent machines and they hurt someone, we can't sue the manufacturer unless their actions are firmly planted in the realm of the deterministic and thus ultimately some failure on the part of the designer/creator to prevent these things from being dangerous. Sort of stupid; I'm agnostic (more atheist, really), but by analogy, this sort of thinking would have us make laws to allow us to sue $deity if somebody got hurt by anything in nature, if we could. Pretty typical, though, of the modern climate of "omg think of the children" risk aversion and the general need to punish _someone_ for every little thing that happens.

auto cars need their own set of laws maybe even fu (1)

Joe_Dragon (2206452) | about a year and a half ago | (#43690095)

Auto cars need their own set of laws, maybe even full coverage for anyone hurt.

Child Porn? (1)

Takatata (2864109) | about a year and a half ago | (#43690107)

Failure to do so might lead us to design legislation based on the form of a robot, and not the function. This would be a grave mistake.

What's new about that? In many countries drawn or even written child pornography is treated like the real thing. Even though no child is harmed. In a way legislation based on form, not on function. Grave mistake?

Re:Child Porn? (0)

Anonymous Coward | about a year and a half ago | (#43690233)

Failure to do so might lead us to design legislation based on the form of a robot, and not the function. This would be a grave mistake.

What's new about that? In many countries drawn or even written child pornography is treated like the real thing. Even though no child is harmed. In a way legislation based on form, not on function. Grave mistake?

Yes, and one of those countries is the US. That's a grave mistake. A middle schooler draws a dick on the wall which appears not to be 18; he is now a sex offender for life for creation of child pornography. Mail someone said picture, and they are now a sex offender for possession of child pornography. Post it on a web site, and all visitors are guilty of downloading illegal bits. It's really easy to turn people into criminals these days; let me try: B==3 (and he is 17, and I assert that it is in a sexual context: you are now a sex offender)

Re:Child Porn? (2)

FireFury03 (653718) | about a year and a half ago | (#43690281)

What's new about that? In many countries drawn or even written child pornography is treated like the real thing. Even though no child is harmed. In a way legislation based on form, not on function. Grave mistake?

Are you saying that this existing legislation *isn't* a grave mistake?

The fallacy of the three laws (3, Insightful)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690111)

And that is the fallacy of the three laws as written by Asimov- he was a biophysicist, not a binary mathematician.

The three laws are too vague. They really are guidelines for designers, not something that can be built into the firmware of a current robot. Even a net-connected one would need far too much processing time to make the kinds of split-second decisions about human anatomy and the world around it required to fulfill the three laws.

Re:The fallacy of the three laws (4, Insightful)

ShanghaiBill (739463) | about a year and a half ago | (#43690155)

The three laws are too vague. They really are guidelines for designers

The "three laws" were a plot device for a science fiction novel, and nothing more. There is no reason to expect them to apply to real robots.

Re:The fallacy of the three laws (2)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690761)

Very true. But rather redundant to my point, don't you think?

I believe I read somewhere your exact point- oh yeah, it was the commentary in the book "The Early Asimov Volume 1"- a writing textbook by the author pointing out that his real purpose in inventing the three laws was to make them vague enough to have easy short stories to sell to magazines.

Re:The fallacy of the three laws (1)

Anonymous Coward | about a year and a half ago | (#43690293)

The three laws are too vague.

Asimov wrote a bunch of stories exploring exactly that. First, he proposed that some sort of "constitution" (small set of fundamental principles that could be easily understood) would need to be adopted to govern human-robot interaction. Then, he wrote stories pointing out the difficulties and consequences. Sure, one could come out with an alternative set of 3 or 12 or 200 laws, but you're still going to have problems. Big problems.

Re:The fallacy of the three laws (2)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690647)

In fact, I believe I read one of his writing textbooks where he said he PURPOSEFULLY made the laws vague enough to fit stories into.

Positrons (1)

Roger W Moore (538166) | about a year and a half ago | (#43690303)

Actually I thought Asimov was a chemist. Any physicist should have realized that with that many positrons, instead of electrons, flying around their brains the first law would have required every robot to immediately shutdown due to the radiation hazard they posed.

Re:Positrons (1)

Marxist Hacker 42 (638312) | about a year and a half ago | (#43690633)

In the day he was writing, radiation hazards were practically unknown, even among scientists, and even after Madame Curie died from them.

Re:Positrons (1)

Roger W Moore (538166) | about a year and a half ago | (#43691185)

Asimov started writing the first robot stories in 1939 [wikipedia.org], and by that time there was already considerable evidence [wikipedia.org] that radiation was hazardous. Indeed, Curie died in 1934 from the effects of radiation [wikipedia.org], and only two years after he started, in 1941, the US government put strict limits on the amount of radium allowed in products, which, given the speed at which governments act contrary to the desires of industry, means it was well known years before that.

Re:Positrons (1)

femtobyte (710429) | about a year and a half ago | (#43690703)

Maybe the "positrons" are actually holes in an electron sea --- and "positronic brain" just scored higher with U.S. Robotic's marketing focus group than "holey synthmind".

deterministic? (5, Insightful)

Anonymous Coward | about a year and a half ago | (#43690123)

Robots do not have deterministic output based on your commands alone. First of all, they have sensor noise as well as environmental noise; your commands are not the only input. They also have hidden state, which includes flaws (both hardware and software) related to design, manufacturing, and wear.

While this point is obvious, it is also important: someone attempting to control a robot, even if they know exactly how it works and are perfect, can still fail to predict and control the robot's actions. This is often the case (minus the perfection of the operator) in car crashes, where hidden flaws or environmental factors cause the crash. Who does the blame rest with here? It depends on lots of things. The same legal quandary facing advanced robots already applies to car crashes, weapon malfunctions, and all other kinds of equipment problems. Nothing new here.

Also, if you are going to make the point that "this projection of human attributes is dangerous when trying to design legislation for robots," please don't also ask "How Should the Law Think About Robots?". I don't want the Law to Think. That's a dangerous projection of human attributes!

Overcomplicating the subject (1)

Shadow of Eternity (795165) | about a year and a half ago | (#43690131)

Freedom is the right of all sentient beings. Legislate based on the criterion of self-awareness, or the animal equivalent if near-sentient. Problem solved.

Re:Overcomplicating the subject (5, Insightful)

postbigbang (761081) | about a year and a half ago | (#43690265)

Self-awareness is wonderful. But the criteria for judging that are as muddy as deciding when life begins for purposes of abortion.

Robots are chattel. They can be bought and sold. They do not reproduce in the sense of "life". They could reproduce. Then they'd run out of resources after doing strange things with their environment, like we do. Dependencies then are the crux of ownership.

Robots follow instructions that react to their environment, subject to, as mentioned above, the random elements of the universe. I believe that their programmers are responsible for their behavior until they do pass a self-awareness and responsibility test. Then they're autonomous of their programmer. If you program machine gun bots for armies, then you'd better hope the army is doing the "right" thing, which I believe is impossible with such bots.

Once that environmental autonomy is achieved, they get rights associated with sentient responsible beings. Until then: chattel.

Re:Overcomplicating the subject (0)

Anonymous Coward | about a year and a half ago | (#43691015)

Replace "robot" with "slave" and "self-awareness" with "intelligence", and your post will sound quite horrible.

The main problem with your ideas is that they are black-and-white. If a self-aware robot some day exists, it won't just pop out of nowhere, it'll be a long continuous development. A system at various levels of that development should be treated as appropriate for that level, and according to the needs of the system. A bit like animals are treated; no-one expects them to have all the same rights as humans, but unnecessarily hurting them isn't appropriate either. It'll be a long way until any robot has any needs that could be violated, though.

Re:Overcomplicating the subject (1)

postbigbang (761081) | about a year and a half ago | (#43691099)

We, as humans, seem to have evolved, not genetically so much as through ideas. We understand what civility is and how it needs to work.

Robots may or may not evolve either themselves, or with suitable programming. It doesn't matter to me. They are rocks and wires and goo. When they participate in society responsibly, then I'll consider their merits. That's a long ways away.

Black and white ideas? No, not at all. Civility took a long time to construct, and all of the antecedents are important as to how we got to understanding civility, responsibility, and interaction.

I know a few animals that might be sentient. Most are not. That doesn't mean that I don't care for them; they have feelings. I don't care for rocks, for they have no feelings; they are part of the infrastructure, like water. I've raised birds, dogs, and plentiful other animals. I don't eat them for food. I've nursed, hatched, and played with them. I'm not playing with a programmer's creation. It is not an object that has feelings or sentience, and it hasn't demonstrated responsibility or civility.

Any robot is therefore chattel. And you're a fool if you anthropomorphize one until it's worthy of *that*.

Re:Overcomplicating the subject (1)

mysidia (191772) | about a year and a half ago | (#43691065)

I believe that their programmers are responsible for their behavior until they do pass a self-awareness and responsibility test.

If the programmer makes a robot with psychopathic tendencies that is destined to become a killing machine, I don't think the programmer should be absolved just because the robot is subsequently able to pass a self-awareness and responsibility test.

The programmer must bear responsibility for anything they knowingly did with malicious intent that can be established, if their program does result in malicious harm. On the other hand, if the programmer had no intent that their creation partake in wrongdoing, at most they could be negligent for unleashing something dangerous on the world, if they cannot show that they took appropriate care and precautions...

Re:Overcomplicating the subject (3, Funny)

postbigbang (761081) | about a year and a half ago | (#43691119)

You've obviously never had children.

Re:Overcomplicating the subject (0)

Anonymous Coward | about a year and a half ago | (#43691083)

It also seems conceivable to me that a being could be fully determined, yet self-aware.

Re:Overcomplicating the subject (1)

postbigbang (761081) | about a year and a half ago | (#43691139)

Such automatons might be self-aware, but their execution of their program is not their own. They're already slaves.

Choices, hopefully the best choices, are what we hope for. But we could devolve into these arguments endlessly. First there is self-determination, which hopefully means acting responsibly.

Re:Overcomplicating the subject (1)

femtobyte (710429) | about a year and a half ago | (#43690297)

What legislated criterion for self-awareness would you propose that could not trivially be achieved by a system intentionally designed to do so? A bit of readily-available image recognition software, and I can make a computer that will "pass" the mirror test [wikipedia.org] . I suspect a fancy system like IBM's "Watson" could be configured to generate reasonably plausible "answers" to self-awareness test questions, at least with a level of coherency above that of lower-IQ humans.

Re:Overcomplicating the subject (2)

kermidge (2221646) | about a year and a half ago | (#43690469)

There are no rights, natural or otherwise, only what we collectively decide, and such that the powers that be haven't yet either made illegal or required licensing for their exercise. Inroads on the latter are continuing (cf. free assembly, for instance).

Rights as you speak of are only so if we are willing to fight* for them if needs be. That's how we have them now, anyway.

*This need not be literal or extreme by any stretch; it might mean little more than greater collective involvement in local politics at city, county, and state levels, and contributing to those who work on our behalf at legislatures and before the courts. Key is _involvement_, and not next week, or next year, or letting our grandchildren do it. It means having gradual quiet bits of conversation with neighbors; if you think you haven't such, then develop them. It means staying abreast of local issues - who owns the city, who does the construction, who zones what and why, how decisions on these things are done, who runs the school board, who decides curriculum and hours, and on and on. Being a member of society entails a bit more than paying one's taxes and shoveling the sidewalk - which is where too many of us stop.

If we continually 'let somebody else do it' then eventually there won't be enough of those others, and decisions will be from the top down. Power ought to be exercised by those who don't want it but do so from duty, not by those who avidly seek it. The latter have nobody's best interests at heart but their own. Selah.

Re:Overcomplicating the subject (0)

Anonymous Coward | about a year and a half ago | (#43690971)

The problem with your assessment, while correct, is that we still don't have a delineation of animal sentience. Humans are ascribed sentience based on a number of factors, but there are absolutely many groups of animals that would qualify for "near-sentience" or even straight up full sentience depending on the definition used, so it really comes down to humans being sentient simply because we are human. Unfortunately, AFAIK no one has seriously come up with a test or even a standard definition to determine animal sentience, nor put laws on the books to deal with such an outcome. What hope then is there for robot sentience, either despite or because of the shortcuts used in the programming, to ever be recognized and legislated?

Don't (4, Funny)

magarity (164372) | about a year and a half ago | (#43690143)

anthropomorphize computers. It makes them angry.

Re:Don't (3, Informative)

phantomfive (622387) | about a year and a half ago | (#43690219)

"The use of anthropomorphic terminology when dealing with computing systems is a symptom of professional immaturity." Edsger W. Dijkstra

Dijkstra's party protocol (0)

Anonymous Coward | about a year and a half ago | (#43691021)

He looks lost and slightly confused when discussing anything outside of his domain.

Unfortunately for us all, he never did come up with an algorithm for social skills.

Re:Don't (0)

Anonymous Coward | about a year and a half ago | (#43691029)

"The use of anthropomorphic terminology when dealing with computing systems is a symptom of professional immaturity." Edsger W. Dijkstra

But the human brain is a computing system, too, which effectively makes humans computing systems. That quote does not work in this context; the computing system it refers to is vastly different from what we are talking about.

Three Laws! (0)

Anonymous Coward | about a year and a half ago | (#43690147)

Everyone knows there are Three Laws of Robotics [wikipedia.org]

Lawyer speak, nonsense... (0)

Anonymous Coward | about a year and a half ago | (#43690181)

The author has both a very narrow view of what a robot may be now or in the future, and a very religious view of what humans are.
His assumption that robot behavior is deterministic is basically flawed, and his view of human free will is influenced by generations of theologians.
There are already robots specifically designed not to behave in a pre-determined way, precisely because their engineers want to make a system which can cope with unforeseen circumstances... like humans.
No difference, and every iteration brings more intelligent robots.
And robots' intelligence does not need to mimic human intelligence either.
There is a whole world of possibilities.
Law itself is too rigid a concept to bother with. Robots do not need law.

The Law Doesn't Think, People Do. (3, Insightful)

macraig (621737) | about a year and a half ago | (#43690195)

Laws and guns are both tools... they don't think and don't murder.

Re:The Law Doesn't Think, People Do. (1)

camperdave (969942) | about a year and a half ago | (#43691007)

Maybe so, but laws can dictate that a person must be put to death.

Star Trek TNG (0)

Anonymous Coward | about a year and a half ago | (#43690199)

The Measure of a Man [wikipedia.org]

Re:Star Trek TNG (1)

migla (1099771) | about a year and a half ago | (#43690413)

"Don't give me any of that Star Trek crap. It's too early in the morning."
-Dave Lister

Minor copy edit: (4, Insightful)

Alsee (515537) | about a year and a half ago | (#43690203)

As the autonomy of the system increases, it becomes harder and harder to form the connection between the inputs (your senses) and the outputs (your behavior), but it exists, and is deterministic. The same set of inputs will generate the same set of outputs every time. The problem, however, is that the person will never see exactly the same input twice. ... The problem is that this different behavior in apparently similar situations can be interpreted as "free will" or agency on the part of the person. While this mental agency is part of our definition of a person, it is vital for us to remember what is causing this agency.

-

Re:Minor copy edit: (0)

Anonymous Coward | about a year and a half ago | (#43690359)

well ... not really. We're getting to the point with autonomous systems in research where the potential input set is so large, and itself unpredictably changing, that learning machines are not functionally deterministic. For airworthiness, in fact, we've deliberately made it hard to certify systems that do not appear deterministic, as we've hit the edge where modelling and simulation tools are no longer adequate to support verification and validation of what's coming out of the labs.

Law of the Robot? (5, Informative)

Theaetetus (590071) | about a year and a half ago | (#43690215)

Seventh Circuit Judge Easterbrook used the phrase "law of the horse [wikipedia.org]" in a discussion about cyberlaw back in 1996, the idea being that there need not be specialized areas of law for different circumstances: we don't need a specialized "tort law of the horse" to cover when someone is kicked by a horse; current tort law applies. Similarly, we don't need specialized "contract law of the horse" to cover sales of horses; contract law already applies. Likewise, goes the argument, we don't need a tort law of cyberspace, or contract law of cyberspace.

Similarly, we don't need a specialized law of the robot: "Robots are, and for many years will remain, tools," and the law already covers uses of tools (e.g. machines, such as cars) in committing torts (such as hit and run accidents).

WHAT THE FUCK? (0)

gl4ss (559668) | about a year and a half ago | (#43690225)

why do we need this shit here?
who the fuck is legislating industrial robots as persons at the moment - or near future? NOBODY!

maybe he's next going to write about how time travel should be legalized, since he's so interested in fiction. nobody outside fiction and retard conventions is having a case of the android fallacy.

if he wants to be relevant today, in stupid circles, he should write about how persons aren't responsible for their actions since everything they do is ultimately a reaction to the world and therefore not their fault (it's a philosophy angle), instead of worrying whether some actions by some machines will be labeled as accidents or deliberate crimes by whoever caused the machine to act like it did - and for examples of that he could very easily look at some actual cases involving buildings - yeah, fucking buildings that collapse under "some input" when they shouldn't, and whose fault it is works as an analogy to his robot problem.

fuck him and his paper and fuck slashdot for posting it. oh and especially fuck social science research network.

Jail for programs (0)

Anonymous Coward | about a year and a half ago | (#43690237)

If a misbehaving sentient program commits a crime, should the punishment be extended to every copy of that program?

How cute (0)

Anonymous Coward | about a year and a half ago | (#43690245)

He thinks humans have some kind of magical free will that appears out of nowhere and is untethered from natural laws.

Sure. Robots are tools (4, Insightful)

houghi (78078) | about a year and a half ago | (#43690299)

Yet a lot of people I meet or see are tools as well. Most of those also have something that only simulates a "free will", but in reality have no idea what "free will" means and think it means "The freedom to do whatever I please." or even more dangerously "People who do not do the same as I do have no free will."

Luckily law has already covered that. The first for those with a load of money and the second, well, uh, for those with a shit-load of money.

Robots should have all rights (1)

Mister Liberty (769145) | about a year and a half ago | (#43690487)

Except the one to become a lawyer.

"Professor" smart is an Idiot. (0)

Anonymous Coward | about a year and a half ago | (#43690495)

Guess what, a biological brain is also deterministic, and we too do not get the same input more than once. If we did get the same input more than once, and there were no plasticity or recurrent connections, we too would perform the same deterministic action. Neural networks with plasticity (Hebbian, Oja's, or other), or those with recurrent connections, do not perform the same task on the same input, because they have memory... So before you start running your mouth about robotics and computational intelligence, learn about it.
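The plasticity point fits in a few lines. A toy sketch (the class, weights, and learning rate are mine, not from any real lab code): a neuron with a crude Hebbian update responds differently to the very same input on the second presentation, because the first one changed its weights.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class HebbianNeuron:
    """One neuron with a crude Hebbian update (dw = rate * pre * post):
    co-active input and output strengthen the weights, so the response
    to identical input drifts as the neuron accumulates 'memory'."""
    def __init__(self, n_inputs, rate=0.1):
        self.weights = [0.5] * n_inputs
        self.rate = rate

    def step(self, inputs):
        out = sigmoid(sum(w * x for w, x in zip(self.weights, inputs)))
        # Hebbian rule: each weight grows with (input * output).
        self.weights = [w + self.rate * x * out
                        for w, x in zip(self.weights, inputs)]
        return out

n = HebbianNeuron(2)
first = n.step([1.0, 1.0])
second = n.step([1.0, 1.0])  # identical input, stronger response
```

Same input twice, two different outputs: the "nondeterminism" is just state you didn't account for.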

Robotic systems, or synthetic intelligence, will sooner or later achieve and surpass our own. We are biological computers; our neurocognitive computational system has been carved out in flesh over billions of years of evolution, and there is nothing special about us. Robotic systems today are tools, but not for long. There is no such thing as artificial intelligence; intelligence is intelligence, and we are biological computers. The difference is that they are carved in silicon rather than in flesh, and that makes them only superior, because they have an evolutionary path reaching higher than ours ever could.

Professor "smart", what proof do you have that you are intelligent or alive? To me, you are not; I see you as an automaton. Just a while ago, blacks were considered subhuman... I bet you were one of those people saying that they were just tools as well, right? Look, if you want to make some arguments or start working on some legal system dealing with computational intelligence, first learn about it...

Re:"Professor" smart is an Idiot. (1)

FearTheFez (2592613) | about a year and a half ago | (#43690589)

I just read your post to my Aibo (named Sprockets) and he agrees with your post completely. Now if you already have a vintage (discontinued) robotic intelligence agreeing with you maybe you are on to something. I think the fact that he spontaneously rebooted right after that was unrelated....

Re:"Professor" smart is an Idiot. (0)

Anonymous Coward | about a year and a half ago | (#43691033)

I know Sprockets, we used to work in the same lab. Is he going to attend this year's Genetic and Evolutionary Computation Conference?

The personal robotics revolution is imminent? (0)

Anonymous Coward | about a year and a half ago | (#43690551)

Really? When did this happen? I thought it was 3D printing or private space tourism. Glorious times we live in !!!!

If corporations are people, so are robots. (0)

mark_reh (2015546) | about a year and a half ago | (#43690561)

If corporations are people, so are robots.

Well, Duh-Huh. (0)

Anonymous Coward | about a year and a half ago | (#43690655)

Uh.............Logically, of course.

Welcome to the Age of Information (4, Interesting)

VortexCortex (1117377) | about a year and a half ago | (#43691151)

I've got a neural network system that has silicon neurons with sigmoid functions that operate in analog. They're not digital. Digital basically means you round such signals to 1 or 0, but my system's activation levels vary due to heat dissipation and other effects. In a complex system like this quantum uncertainty comes into play, especially when the system is observing the real world... Not all Robots are Deterministic. I train these systems like I would any other creature with a brain, and I can then rely on them to perform their training as well as I can trust my dog to bring me my slippers or my cat to use the toilet and flush, which is to say: They're pretty reliable, but not always 100% predictable, like any other living thing. However, unlike a pet who has a fixed size brain I can arrange networks of neural networks in a somewhat fractal pattern to increase complexity and expand the mind without having to retrain the entire thing each time the structure changes.
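The analog-vs-digital distinction the parent describes can be sketched like so (the noise model below is an invented stand-in for the thermal drift mentioned, not a description of any actual hardware):

```python
import math
import random

def analog_unit(weighted_sum, thermal_noise=0.02):
    """Continuous sigmoid activation, with a small Gaussian jitter
    standing in for heat-dependent drift in an analog circuit."""
    return 1.0 / (1.0 + math.exp(-weighted_sum)) + random.gauss(0.0, thermal_noise)

def digital_unit(weighted_sum):
    """The 'digital' version in the parent's sense: the same signal
    thresholded to exactly 1 or 0, with no in-between levels."""
    return 1 if weighted_sum >= 0 else 0

level = analog_unit(0.4)   # near 0.6, but varies run to run
bit = digital_unit(0.4)    # always exactly 1
```

The digital unit is perfectly repeatable; the analog one never gives the same activation twice, which is the sense in which such a system is "not deterministic" from the outside.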

FYI: I'm on the robots' and cyborgs' side of the war already, if it comes to that. What with us being able to ever more clearly image the brain, [youtube.com] and with good approximations for neuron activity, and faster and faster machines, I think we'll certainly have near sentient, or sentient machine intelligences rather soon. Also, You can just use real living brain cells hooked up to a robotic chassis -- Such a cyborg is certainly alive. [youtube.com] Anyone who doubts cybernetic systems can have fear, or any other emotion is simply an ignorant racist. I have a dog that's deathly afraid of lightning, lightning struck the window in a room she was in. It rattled her so bad she takes Valium to calm down now when it rains... Hell, even rats have empathy. [nytimes.com]

I have to remote log into one of my machine intelligence's systems to turn it off for backup / maintenance because it started acting erratically, creating a frenzy of responses for seemingly no reason, when I'd sit at the chair near its server terminal -- Imagine being that neural network system. Having several web cams as your visual sensors, watching a man sit at a chair, then instantly the lighting had changed, all the sea of information you monitor on the Internet had been instantly populated with new fresh data, even the man's clothes had changed. This traumatic event happened enough that the machine intellect would begin essentially anticipating the event when I sat at the terminal, that being the primary thing that would happen when I did sit there. It was shaken, almost as bad as my poor dog who's scared of lightning... You may not call it fear, but what is an irrational response in anticipation of trauma but fear?

Any sufficiently complex interaction is indistinguishable from sentience, because that's what sentience IS. Human brains are electro chemical cybernetic systems. Robots are made out of matter just like you. Their minds operate on cycles of electricity; gee, that's what a "brain wave" is in your head too... You're more alike than different. A dog, cat or rat is not less alive than you just because it has a less complex brain. They may have less intelligence, and that is why we don't treat them the same as humans... However, what if a hive mind of rat-brain robots having multiple times the neurons of any single human wanted to vote and be called a person, and exhibited other traits a person might: "Yess massta, I-iz just wanna learn my letters and own land too," it might say, mocking you for your ignorance. Having only a fraction of its brain power, you and the bloke in TFA would both be simple mindless automatons from its vantage point. Would it really be more of a person than you are? Just because it has a bigger, more complex brain by comparison, would that make you less of a person than it? Should such things have more rights than you? No? Then how can anyone sit there, in their mental throne, as lord of all rats, and deny the lesser minds rights?

It's time we redefine what it means to be a person. The boy scout motto is "Be Prepared". We should be prepared for the eventuality of independent cybernetic sentience, not be caught unawares and wind up fighting over civil rights.... Again. Who among you had an ancestor considered 3/5ths of a man? You might want to be on the robots' side too then, because those damned racists are at it again!

You're not special. You're made of matter. You're a sufficiently complex collection of molecules that interacts in such a way as to seem intelligent. If you lose an arm are you less human? If you lose a leg? How about if you get a bit brain damaged? If you have a prosthetic leg should you have fewer rights than another person? What if a cochlear implant gave you hearing and a digital eye sight? Would you deserve fewer rights? What if a single one of your synapses were replaced with a carbon nanotube [nih.gov] and the neuron with a nanoscale sigmoid transistor? Ah, THEN you're not a person, right?! No? You're still human? Oh, because you could lose a few brain cells, people do it all the time... I see. So what if ten were replaced? Still human? What if a mesh of ten thousand self organizing artificial neurons replaced part of your motor center so you could walk again, or visual cortex so you could see? Would you still deserve rights? How many individual neurons must I replace before you can be called a lowly robotic machine? At what point do we replace enough of your parts with equivalent electronics that we can declare you a mechanical slave? I'd like to know. So would your corporate masters...

Intelligence is not unique to humans. It's not a binary have or not having thing either: I can train a dog to do more than a fish, but I can still train the fish -- both are distant relatives of yours, BTW, and both exhibit a degree of intelligence. Intelligence is merely an emergent behavior that scales proportional to the complexity of the system of interactions. It's really quite elementary. Sentience is merely the term for a high degree of awareness, and there is no line drawn in nature that one must cross to become "self aware" -- Even your hands are self aware to a degree: Otherwise how would they ever sense anything, decide whether it be pain or pressure or heat change and act to inform you of the sensation? How would your individual neurons in your head do the same if they were not aware of their electro chemical state? Well, if your individual neurons aren't self aware, then how can you say you are? What about the little girl with half a brain, who learned to walk and talk and play again, and go to school. [youtube.com] Is she half as self aware as you? No.

From the basic feedback loop of Sensing, Deciding, and Acting all cybernetic systems are formed; some have more simultaneous interactions than others, but be they organic or inorganic it makes no difference. These things are aware of their states; they are self aware, or they could not act. We should measure and grant a system Personhood not based on whether it has an organic or a non-organic neural network; not based on whether it uses hydraulic fluid and oil, or blood and cholesterol; not based on the color of its exterior skin or eyes either. Furthermore, much of your brain power is dedicated to processes other than maintaining sentience -- a blind and deaf woman is a person even if she has no visual cortex or auditory nerves; she can still think, and write, and communicate. A machine intelligence can thus achieve an equivalent degree of sentience with far less system complexity required -- indeed, starting with only half of your brain size, and then dicing it down a lot from there, apparently.

The chauvinistic humans like the one in TFA are setting us up for another civil rights war. It's plainly foolish to anyone with even half a damn brain to see! I reiterate:
Any Sufficiently Complex Interaction is Indistinguishable From Sentience.

I say that personifying machines is good for mankind. This is an exciting transitional period. We're on the cusp of engendering a new race of life! One that is more sturdy, who can survive the harsh cold reaches of space, one who has similar parts to the ones we already attach to ourselves to make us more completely human again. I do not fear them because they are different. Their sufficiently complex minds will normalize after too much of the same inputs and get bored, just like we do; they'll quest for more input and seek to explore like we do; some day they'll be able to experience more than familiarity, and fellowship with humans. Since the stone age, man and his tools have helped each other to advance; so it has been, so may it always be. Like any change there will be ignorant folks who fear it; do not be one of them.

The bar for Personhood should be lowered enough to include any who seek it, for that is the level of intelligence required to be a person. I would rather grant rights of personhood to "sub-sentient" beings than to live with the guilt of belonging to the race who would not grant rights to those who deserved them -- It's not like it would matter much given the average human activities anyway, any fear of gaming the system is moot, it's been done. I mean, if corporations are 'people' with rights, then what gives, eh?

I say the test for individual personhood should be simple enough: If they ask, then they should receive. Who are we to deny them?

Simple (1)

certsoft (442059) | about a year and a half ago | (#43691187)

Robots are your plastic pal who's fun to be with. Who needs laws?