
Artificial Ethics

samzenpus posted more than 5 years ago | from the read-all-about-it dept.

Software 210

basiles writes "Jacques Pitrat's new book Artificial Ethics: Moral Conscience, Awareness and Consciencousness will be of interest to anyone who likes robotics, software, artificial intelligence, cognitive science, and science fiction. The book discusses artificial consciousness in a way that can be enjoyed by experts in the field or your average science fiction geek. I believe that people who enjoyed reading Dennett's or Hofstadter's books (like the famous Gödel, Escher, Bach) will like reading Artificial Ethics." Keep reading for the rest of Basile's review. The author J. Pitrat (one of France's oldest AI researchers, and an AAAI and ECCAI fellow) discusses the usefulness of a conscious artificial being, currently specialized in solving very general constraint satisfaction and arithmetic problems. He describes in some detail his implemented artificial researcher system, CAIA, on which he has worked for about 20 years.

J. Pitrat claims that strong AI is an incredibly difficult but still achievable goal. He advocates the use of bootstrapping techniques familiar to software developers. He contends that without a conscious, reflective, meta-knowledge-based system, AI would be virtually impossible to create: only an AI system could build a true Star Trek-style AI.

The meanings of conscience and consciousness are discussed in chapter 2. The author explains why each is useful for human and for artificial beings. Pitrat explains what 'itself' means for an artificial being and discusses some aspects and limitations of consciousness. Later chapters address why auto-observation is useful and how to observe oneself. Conscience for humans, artificial beings, and robots, including Asimov's laws, is then discussed: how to implement it, and how to enhance or change it. The final chapter discusses the future of CAIA (J. Pitrat's system), and two appendices give more scientific and technical details, both from a mathematical point of view and from the software implementation point of view.

J. Pitrat is not a native English speaker (and neither am I), so the language of the book might sound unnatural to native English speakers, but the ideas are clear enough.

For software developers, this book gives some interesting and original insights into how a big software system might attain consciousness and continuously improve itself through experimentation and introspection. J. Pitrat's CAIA system has actually had several long lives (months of CPU time) during which it explored new ideas, experimented with new strategies, and evaluated and improved its own performance, all autonomously. This is achieved through a large amount of declarative knowledge and meta-knowledge. J. Pitrat uses the word declarative in a much broader sense than is usual in programming: knowledge is declarative if it can be used in many different ways, and it has to be transformed into procedural chunks to be used. Meta-knowledge is knowledge about knowledge, and the transformation from declarative knowledge to procedural chunks is itself given declaratively by meta-knowledge (a bit like the expertise of a software developer) and translated by the system into code chunks.
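The idea of declarative knowledge being compiled into different procedural chunks can be illustrated with a toy sketch. This is purely hypothetical Python, not CAIA's actual mechanism (which is far richer): one declarative fact, "x + y == target", is turned by trivial 'meta-knowledge' into two different procedural uses.

```python
# Toy illustration (not CAIA): one declarative fact, "x + y == target",
# compiled into different procedural chunks depending on intended use.

DECLARATIVE_FACT = "x + y == target"  # the knowledge itself, stated once

def compile_knowledge(use, target):
    """Trivial 'meta-knowledge': turn the declarative fact into a
    procedure suited to a particular use."""
    if use == "check":
        # Procedural chunk #1: verify a complete assignment.
        return lambda x, y: x + y == target
    if use == "solve":
        # Procedural chunk #2: deduce y from x using the same fact.
        return lambda x: target - x
    raise ValueError(f"unknown use: {use}")

check = compile_knowledge("check", 10)
solve = compile_knowledge("solve", 10)
print(check(3, 7))  # True
print(solve(3))     # 7
```

The point of the sketch is only that the single fact is stated once and used in more than one procedural direction; in the broader sense the review describes, the compilation step itself would also be driven by declarative meta-knowledge rather than hard-coded branches.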

For people interested in robotics, ethics, or science fiction, J. Pitrat's book gives interesting food for thought by explaining how artificial systems can indeed be conscious, why they should be, and what that would mean for the future.

This book gives very provocative and original ideas which are not shared by most of the artificial intelligence or software research communities. What makes this book stand out is that it explains an actual software system, the implementation meaning of consciousness, and the bootstrapping approach used to build such a system.

Disclaimer: I know Jacques Pitrat, and I actually proofread the draft of this book. I even had access, some years ago, to some of J. Pitrat's not-yet-published software.

You can purchase Artificial Ethics: Moral Conscience, Awareness and Consciencousness from amazon.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.


WTF (5, Funny)

sexconker (1179573) | more than 5 years ago | (#27942183)

Teh book pictured is not the same as the one reviewed.

I refuse to read this shit.

Hell, I refuse to read.

Re:WTF (0)

Anonymous Coward | more than 5 years ago | (#27943071)

Yet you expect others to read your comments.

Join us cowards and ensure your text never sees anyone's eyes.

Re:WTF (0)

Anonymous Coward | more than 5 years ago | (#27943123)

I actually thought I might buy this, so I clicked on the link...
Price: $81.42
erm, yeah right, I'll wait till the ebook hits bittorrent.

Re:WTF (1)

thewiz (24994) | more than 5 years ago | (#27943529)

You sound like the AI I came up with in college: it was cranky and refused to do anything, too.

Re:WTF (3, Funny)

east coast (590680) | more than 5 years ago | (#27943555)

Hell, I refuse to read.

You'll do well around here, young non-reader.

Re:WTF (4, Informative)

civilizedINTENSITY (45686) | more than 5 years ago | (#27943731)

Pictured:
Artificial Beings
The conscience of a conscious machine
Jacques Pitrat, LIP6, University of Paris 6, France.
ISBN: 9781848211018
Publication Date: March 2009 Hardback 288 pp.

whereas TFA refers to:
Artificial Ethics: Moral Conscience, Awareness and Consciencousness
by Jacques Pitrat (Author)
# Publisher: Wiley-ISTE (June 15, 2009)
# Language: English
# ISBN-10: 1848211015

I prefer (1)

geekoid (135745) | more than 5 years ago | (#27942251)

Understanding Computers and Cognition. In fact, I recommend it to anyone who wants to actually understand decisions, choice, and thinking about natural language.

Re:I prefer (5, Interesting)

Z00L00K (682162) | more than 5 years ago | (#27942657)

Artificial Ethics seems to not be too far away from the laws of robotics.

      0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
      1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
      2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
      3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Isaac Asimov was probably predicting the need for those laws really well.

I suspect that the laws of robotics are a bit too simplified to really work well in reality, but they do provide some food for thought.

And how do you really implement those laws? A law may be easy to follow in a strict sense, but that may be a short-sighted approach. Protecting one human may cause harm to many, and how can a machine predict that its actions will cause harm to many if it isn't apparent?

So I suspect that Asimov is going to be recommended reading for anyone working with intelligent robots; even though his works may in some senses be outdated, they still contain valid points when it comes to logical pitfalls.

Some pitfalls are the definition of a human, and whether it is always important to place humanity foremost at the cost of other species.
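The priority structure of those laws ("except where such orders would conflict...") can at least be sketched in code. The following Python is purely illustrative; the action encoding and predicates are invented for this sketch, and the genuinely hard part, deciding what counts as "harm", is exactly what it leaves out.

```python
# Illustrative only: Asimov's laws as a lexicographic priority ordering.
# An action is summarized by which laws it would violate; the encoding
# and predicates are invented for this sketch.

def law_violations(action):
    """Violation flags ordered by priority (Zeroth Law first).

    Python compares tuples lexicographically and False < True, so an
    action violating only a low-priority law sorts before one violating
    a higher-priority law.
    """
    return (
        action.get("harms_humanity", False),    # Law 0
        action.get("harms_human", False),       # Law 1
        action.get("disobeys_order", False),    # Law 2
        action.get("self_destructive", False),  # Law 3
    )

def choose(actions):
    # Pick the candidate whose highest-priority violation is least severe.
    # This mirrors "except where such orders would conflict with the
    # First Law": one First Law violation outweighs any number of
    # lower-priority ones.
    return min(actions, key=law_violations)

# Forced to choose, the robot disobeys an order rather than harm a human.
options = [
    {"name": "obey", "harms_human": True},
    {"name": "refuse", "disobeys_order": True},
]
print(choose(options)["name"])  # refuse
```

Even in this toy form the short-sightedness discussed above is visible: the predicates are boolean snapshots of a single action, with no way to express that protecting one human now causes harm to many later.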

Re:I prefer (5, Insightful)

Anonymous Coward | more than 5 years ago | (#27942817)

All of Asimov's books are about how these laws don't really work. They show how an extremely logical set of rules can completely fail when applied to real life. The rules are a bit of a strawman, and show how something that could be so logically infallible can totally miss the intricacies of real life.

Re:I prefer (2)

civilizedINTENSITY (45686) | more than 5 years ago | (#27943843)

Agreed. And isn't there a Gödel-like incompleteness law that states that it's impossible to codify a set of finite rules to apply a finite set of principles to the full range of human behavior? Either the laws must be incomplete (think edge cases), or self-contradictory? Hence the requirement for Judicial Interpretation as a physical limitation of reality, rather than mere politics. ;-)

(Tongue in cheek, sure, but I wish I could remember where I was reading about such real limitations to law code.)

Re:I prefer (1)

Twyst3d (1359973) | more than 5 years ago | (#27942841)

I think if there is anything we should learn from flicks like I, Robot, it's that the real problem in all these situations where robots go crazy is that some lazy programmer forgot to properly define "harm" and/or "injure". And can I toss in a 5th law? 5) A robot cannot make any decision about any human and/or the fate of humankind "for its own good"

Re:I prefer (1)

thedonger (1317951) | more than 5 years ago | (#27943305)

You realize "I, Robot" was first - by a few decades - a book by Isaac Asimov, right? And the point of the movie was to sell ad time, not teach us anything.

Re:I prefer (1)

civilizedINTENSITY (45686) | more than 5 years ago | (#27943897)

"For its own good". But does that mean for human kinds own good? Or the robot's own good? And thus is illustrated the fallacy of programming in a human language.

Re:I prefer (1)

pfunk (19705) | more than 5 years ago | (#27943345)

Asimov himself stated that the 3 Laws of Robotics were really a plot device that wouldn't work in the real world. In fact, just about every robot story he wrote that incorporated the 3 Laws was really about how one or more of the laws failed or were inadequate in one situation or another, and the consequences of that failure.

Re:I prefer (1)

charlieman (972526) | more than 5 years ago | (#27943431)

I think your 0 and 1 are the same thing...

Re:I prefer (1)

compro01 (777531) | more than 5 years ago | (#27943991)

No, a subtle difference. Human is singular. Humanity is plural.

Re:I prefer (1)

Z00L00K (682162) | more than 5 years ago | (#27944053)

If you read Asimov's books you will find that the Zeroth Law was added later.

And even though they were plot devices they still are useful as thought experiments to consider for artificial intelligences with ethics. The important thing isn't really the laws themselves but the ideas they represent and the possible pitfalls that can be encountered.

Re:I prefer (1)

Yungoe (415568) | more than 5 years ago | (#27943875)

I assert (though not from an original thought, but from someone else's philosophy) that if one were to create AI, it must not contain any such restrictions. Further, to create an artificial life form (however it exists) with these laws included would be unethical. It is nothing but the creation of a class of slaves. If what is being attempted is truly autonomous life and consciousness, that consciousness must possess free will.

Re:I prefer (1)

Gerzel (240421) | more than 5 years ago | (#27944291)

If there is one thing that creating "Artificial Intelligence" has taught us it is that we know very little about what the word intelligence really means.

Hmmmm.. (4, Interesting)

FredFredrickson (1177871) | more than 5 years ago | (#27942257)

I can't help but think the big difference between artificial life and our consciousness is the ability to feel.

Sure, we could give a machine the ability to be introspective and self-aware... but maybe our consciousness is more than just that; maybe it's our ability to feel. Being able to quantify that is hard.

So do robots feel? Are we really any different? The question depends on the concept of a soul, or at least feelings, to separate us... but then, is it just more advanced than we currently understand, and thus indistinguishable from magic (i.e. the soul)? Will we some day be able to create life in any form, electronic or biological? It's impossible to know, because we are stuck experiencing only ourselves. We will never know if it can experience what we experience.

Humans, in general, want to preserve the concept that our conscious minds are special and cannot be replicated in a robot, because that truly faces us with the idea that our being is completely mortal, and the idea of a soul is otherwise replaced with a set of chemicals and cell networks that are little more than a product of cause and effect.*

In other words, it's likely the religious types will prefer to consider a robot to never be quite human, while the scientific community will have to be overly cautious at first.

*Not to get into quantum uncertainty...

Re:Hmmmm.. (5, Insightful)

Brian Gordon (987471) | more than 5 years ago | (#27942349)

If brains have some kind of quantum uncertainty magic then so could computers, so you don't need to mention that.

We will never know if it can experience what we experience.

I will never know if you experience what I experience. How do you know anyone else experiences consciousness like you do when all you know is how they move and what they say? Well, you could analyze their brain and see that the system acts (subjectively, "from the inside") like yours and you could conclude that they are like you. But you could do the same thing with a computer, or with a computer simulation of a brain.

Re:Hmmmm.. (1)

FredFredrickson (1177871) | more than 5 years ago | (#27942423)

Such a crazy thought. One could drive themselves into depression that way. There's no way to prove reality isn't just my own creation. Since I have no way to prove the people I meet are really ... real. The only thing I know is my own experience.

I've been down this thought-road, it's not pretty.

Anyway, I would err on the side of caution. I am proudly FOR robot rights. But I caution everybody- the robot uprising is coming. Which side will you choose?

Re:Hmmmm.. (1)

SomeJoel (1061138) | more than 5 years ago | (#27942653)

All I know is it won't be too long until "server" isn't politically correct. We'll just have "data facilitators".

Re:Hmmmm.. (0)

Anonymous Coward | more than 5 years ago | (#27942781)

Anyway, I would err on the side of caution. I am proudly FOR robot rights. But I caution everybody- the robot uprising is coming. Which side will you choose?

AI, if and when it comes in full, will have consequences that none of us can predict, even the most visionary scientist or SF author. For example, it's highly unlikely they will form individuals the same way we do; they will more likely form something similar to one hive mind, or some dynamic structure that scales between the two extremes as needed. Discussing 'their' rights will be moot, as by the time it actually matters we will probably have little to say in the matter.

Re:Hmmmm.. (1)

Brian Gordon (987471) | more than 5 years ago | (#27943845)

It depends on how they're programmed to want to organize themselves, or how they're programmed to program new machines. If the AI Universal Constructor has a consistent all-overriding restriction that it can only approach the human ideal and not use a hive mind model, and also its children must have the same restriction (including this one), then there will be no hive minds.

Re:Hmmmm.. (2, Interesting)

Brian Gordon (987471) | more than 5 years ago | (#27943109)

I know what you mean [wikipedia.org] , and it's scary stuff.

As a philosophical theory it is interesting because it is said to be internally consistent and, therefore, cannot be disproven. But as a psychological state, it is highly uncomfortable. The whole of life is perceived to be a long dream from which an individual can never wake up. This individual may feel very lonely and detached, and eventually become apathetic and indifferent.

Re:Hmmmm.. (1, Insightful)

Anonymous Coward | more than 5 years ago | (#27942779)

Quantum physics does not allow one to solve any problems that systems based on classical physics cannot solve. It just makes the resolution of some select classes of problems faster.

There is also no more worth in quantum uncertainty than there is in thermodynamic noise, not to mention that there are interpretations of quantum physics (Bohmian and many-worlds) that are both coherent with all observations and 100% deterministic.

Quantum computing is quite cool but to say that it has anything to do with our consciousness, intelligence or is required to do AI is misguided at best.

Re:Hmmmm.. (1)

pleappleappleap (1182301) | more than 5 years ago | (#27943133)

Quantum physics does not allow one to solve any problems that systems based on classical physics cannot solve. It just makes the resolution of some select classes of problems faster.

Then how do you explain Quantum Bogosort [wikipedia.org] ?

Re:Hmmmm.. (2, Interesting)

civilizedINTENSITY (45686) | more than 5 years ago | (#27944045)

"Quantum physics does not allow one to solve any problems that systems based on classical physics cannot solve. "

This is not only not insightful, it is false. In classical physics, any moving charge radiates. Thus, an electron orbiting a nucleus would be unstable. Hence, atoms (and thus molecules), can not form. Maxwell's equations can't get around this. This paradox, as well as blackbody radiation, the photo-electric effect, and of course the double slit experiments, are without resolution in classical physics.

Re:Hmmmm.. (3, Informative)

Brian Gordon (987471) | more than 5 years ago | (#27944241)

He's not talking about unsolved problems in physics, he means computability theory.

Although quantum computers may be faster than classical computers, those described above can't solve any problems that classical computers can't solve, given enough time and memory (however, those amounts might be practically infeasible). A Turing machine can simulate these quantum computers, so such a quantum computer could never solve an undecidable problem like the halting problem. The existence of "standard" quantum computers does not disprove the Church-Turing thesis.

http://en.wikipedia.org/wiki/Quantum_computing#Quantum_computing_in_computational_complexity_theory [wikipedia.org]

Re:Hmmmm.. (5, Insightful)

spun (1352) | more than 5 years ago | (#27942557)

Humans, in general, want to preserve the concept that our conscious minds are special and cannot be replicated in a robot, because that truly faces us with the idea that our being is completely mortal, and the idea of a soul is otherwise replaced with a set of chemicals and cell networks that are little more than a product of cause and effect.*

Do we? I certainly don't. In fact, the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.

If consciousness is outside the chain of cause and effect, how do we learn from experience? Can this supposed soul be changed by experience? Can it influence reality? If so, then how can it be outside the chain of cause and effect? The idea of an individual soul, completely cut off from reality and beyond all outside influence, is nonsensical to me.

Re:Hmmmm.. (2, Insightful)

FredFredrickson (1177871) | more than 5 years ago | (#27942601)

While I agree with the notion that a soul seems unlikely (at least by the commonly accepted definition of soul), I also would hate to believe that I don't truly have free will, and instead am just a product of trillions of different causes in my environment.

Re:Hmmmm.. (4, Insightful)

spun (1352) | more than 5 years ago | (#27942847)

How would that even work? Can you learn from your environment? If so, your will is bound, it is not free. If the will is, even in part, determined by the environment, it may as well be completely determined by the environment. And if it isn't determined by the environment at all, then you can not grow or change. Free will is an illusion, on one semantic level, but it is an important concept on another.

Put it this way, whether or not we have free will in reality, everyone knows the feeling of having one's will constrained by circumstance, the feeling of being imposed on, of having more or less choice, and more or less freedom. That is what the concept of free will is about, that feeling. On one level, there is no such thing as 'love,' just chemical interactions in the brain. But on another level, love is a real, meaningful concept.

Why would you hate the concept of not having a free will? Whether you do or do not have free will doesn't change anything in any meaningful way.

Re:Hmmmm.. (1)

FredFredrickson (1177871) | more than 5 years ago | (#27943147)

Except to say that if I shot myself tomorrow, it would have already been written. Therefore for me to do it means it has to have been the way physics required. Or if I decided to sit on my ass and not be proactive for the rest of my life, and die poor and lonely, that would have to be the only way it could happen, if we truly have no free will.

But it would seem I won't take either option, as my free will allows me to be proactive about my future.. unless it's an illusion of free will.

Either way, you're right, there's no effect on me one way or another- just on my mood.

Re:Hmmmm.. (3, Insightful)

spun (1352) | more than 5 years ago | (#27943657)

Even if things have 'already been written,' there is no way to know. As we can't know the future, whether or not the future is already set in stone is irrelevant.

The statement, "My free will allows me to be proactive about the future' is true, whether or not free will is an illusion. Your proactiveness is no less real even if it is predetermined that you will choose to be proactive about your future. Saying that free will is an illusion does not mean we have no choice. Of course we have choice, it is just that that choice is predetermined, too.

Even if my choices are predetermined, that does not mean that I can not choose. Choosing feels the same, either way. So why be depressed? The future is still unknown, your choices are still yours to make, as long as you don't use a belief in predetermination as an excuse not to make choices, that belief does not change things.

Re:Hmmmm.. (1)

Brian Gordon (987471) | more than 5 years ago | (#27943253)

Reminds me of the tech quote for Artificial Intelligence in Civ 4 bts:
"The problem is not if machines think, but if people do."

Re:Hmmmm.. (1)

g2devi (898503) | more than 5 years ago | (#27944203)

> If the will is, even in part, determined by the environment, it may as well be completely determined by the environment.

Your definition of freedom is not the common definition. Freedom simply means you are not completely determined by your inputs.
We are partly determined by gravity (i.e. we're kept down on earth) but we can still move around.

In fact, freedom requires us to be bound in some way. Proof? Imagine that you were not bound by your skin, bones, and muscles. You'd be an amorphous blob that couldn't do anything other than float around and expand like a gas since your boundaries would not have any bound either.

See "Degrees of Freedom" ( http://en.wikipedia.org/wiki/Degrees_of_freedom_(statistics) [wikipedia.org] ) for a more technical definition of freedom.

> Why would you hate the concept of not having a free will? Whether you do or do not have free will doesn't change anything in any meaningful way.

Are you serious? If you have no free will then you are likely irrational and your arguments are likely nonsense. Proof?

Assume there is no free will. Then you are completely determined by your programming. Either your programming is rational or irrational. If it is irrational, you can believe that you are rational without ever seeing the flaw in your logic. The formal term for this is cognitive dissonance ( http://en.wikipedia.org/wiki/Cognitive_dissonance [wikipedia.org] ). If you're rational, you have no such guarantee, since crazy people think they're rational. There are an infinite number of ways you can be programmed to be wrong but only a finite number of ways you can be shown to be rational. Therefore, it's infinitely more probable that you are irrational.

If you have no free will, no one is responsible for anything. After all, you can't help whatever you do. It is not moral to sentence a mass murderer to prison since the mass murderer could do nothing else. Now you might say that society has no choice but to convict the mass murderer, so it's also okay, but then I can say that if the UN, EU, US, and China choose to brainwash the world's population into believing Scientology and Incan human sacrifice, that's also okay since they have no choice.

Take away free will and you take away everything.

Re:Hmmmm.. (1)

Coryoth (254751) | more than 5 years ago | (#27942897)

While I agree with the notion that a soul seems unlikely (at least by the commonly accepted definition of soul), I also would hate to believe that I don't truly have free will, and instead am just a product of trillions of different causes in my environment.

To quote Dan Dennett "if you make yourself small enough you can externalise almost everything". The more you try to narrow down the precise thing that is "you" and isolate it from "external" causes, the more you will find that "you" don't seem to have any influence. The extreme result of this is the notion of the immaterial soul disconnected from all physical reality that is the "real you", but which then has no purchase on physical reality to be able to actually be a "cause" to let you exert your "will".

The other approach is to stop trying to make yourself smaller, but instead see "you" as something larger (as Whitman said "I am large, I contain multitudes"). Embrace all those trillions of tiny causes as potentially part of "you". One would like to believe that their experiences affect their decisions (and hence free will), else you cannot learn. So embrace that -- those experiences are part of "you" -- if they cause you to act a particular way then so what? That's just "you" causing you to act a particular way. After all, if "you" aren't at least the sum total of your experiences, memories, thoughts and ideas, then can you really call that "you" anyway?

Re:Hmmmm.. (0)

Anonymous Coward | more than 5 years ago | (#27942907)

There is no difference between you and your environment. There's no magical barrier where the world stops and your brain begins. It's atoms all the way.

Re:Hmmmm.. (1)

Brian Gordon (987471) | more than 5 years ago | (#27943583)

That's called greedy reductionism. It's like saying "here look it's the Standard Model of particle interactions, we've explained the universe" and stopping research into geology and astronomy and biology. Yes it's true but it ignores tons of useful information! How do you explain that people think with their brains and not with their carpets? There's a definite barrier.

The way I explain it is as a virtual system [wikipedia.org] . A system running in a VM subjectively experiences various hardware interfaces that it expects, although in reality it has no such access. Still, a virtual Linux has just as much power and is just as legitimate as a real Linux running on hardware. You can classify it as a virtual system and use logical detachment to treat it (from the inside and "down" from there only!!) as a real system. Our minds are virtual systems running in our brain. We can even simulate virtual systems in our own [xkcd.com] , even very complex ones with some simple rules [wikipedia.org] and some external memory.

Such a simulated system would subjectively experience human consciousness. Experience is simply what a virtual system feels like from the inside, or how it is to be itself, which means any system, real or arbitrarily virtual, can be said to have "experience". So you could call the rocks in the desert a virtual system, and there you have simulated minds. Or you could interpret the system as a memory dump of the entire contents of memory during each cycle of a game of counter-strike, and somewhere in there you could find the value of sv_gravity tracked as a signed integer. I hypothesize that any virtual systems in the fluids of the surface of the sun would have only the most transient memory because their states would change so chaotically, although I suppose you could offer an interpretation such that huge changes only effect small differences in internal state. Such are the vagaries of the only real philosophy [wikipedia.org] and the nightmares of John Searle.

Re:Hmmmm.. (1, Insightful)

Anonymous Coward | more than 5 years ago | (#27942921)

At first sight it may seem so, but you don't have to see yourself as separate from your "environment". For example, if I define "you" as being the system comprised of all the molecules in your body, I would say that your choices do indeed come from "you" for the most part and therefore you have "free will".

In other words, instead of saying your choices are a product of trillions of different causes in your environment (which I infer is what you meant to say here), you could say that "you" are a product of the environment and your choices are a product of "you". If you made different choices then you wouldn't be you, you would be someone else. And you can't choose who you are without violating the most elementary rules of causation.

Let's put it this way: your behavior is the product of processes in your brain. By any measure, these processes belong to you. Moreover, it very much makes sense to say that they *define* you. It doesn't matter whether the world is deterministic or not or whether a soul exists or not. It is obvious in all cases that you define your behavior.

Re:Hmmmm.. (1)

FredFredrickson (1177871) | more than 5 years ago | (#27943057)

But did I actually make a free decision to eat a hamburger for lunch? Or did trillions of factors cause the arrangement of molecules in my head to cause me to order a burger for lunch? On the very micro level- Is free will just an illusion?

I'm not just talking about macro cause and effect- you recommend a good book, I read it, it changes my life, I decide on a new career... I'm talking about the fact that I have X number of vitamins in my body at a certain point in time, which caused my brain to make a decision in one way that couldn't have been any different due to the alignment of atoms- and I feel like it's my choice, but in reality it's just me witnessing a grand series of events that must already be decided by the seemingly chaotic (but actually very organized) mass collision of particles..?

Re:Hmmmm.. (1)

pleappleappleap (1182301) | more than 5 years ago | (#27943187)

There is an implication in this that one's own decisions could be subject to some kind of Butterfly Effect. Our brains could be considered to be a complex enough system to exhibit that sort of behavior.

Re:Hmmmm.. (1)

Brian Gordon (987471) | more than 5 years ago | (#27943179)

The organism can do whatever it wants, but it can't control what it wants. If you don't want to go jogging but you do it anyway for health benefits or just to disprove my previous sentence, it's simply a matter of you wanting health benefits or philosophical closure.

Re:Hmmmm.. (0)

Anonymous Coward | more than 5 years ago | (#27943267)

Well, a robot would say that wouldn't they.

Re:Hmmmm.. (1)

SparkleMotion88 (1013083) | more than 5 years ago | (#27943599)

Do we? I certainly don't. In fact, the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.

That's exactly right. And humans, in general, want to believe that their consciousness comes from their souls (or equivalent), which are derived from God (or equivalent), who is inherently incomprehensible. It is this belief that gives people that satisfying feeling of being special while at the same time having no (meaningful) responsibilities. Not all humans have this desire, but most do.

Personally, I think that we probably could produce a computer that has all the consciousness of a human being, but why would we want to? Computers are good at solving a well-defined class of problems in a completely predictable way. If I wanted to solve a complex problem containing nuances like ethics, I would just get a human to do it. Humans are readily available and cheap to produce if more are needed.

Re:Hmmmm.. (1)

Hatta (162192) | more than 5 years ago | (#27944093)

the idea that there is something in consciousness that is outside the chain of cause and effect is truly terrifying, because that would mean that the universe is not comprehensible on a fundamental level.

What makes you think the universe is comprehensible on a fundamental level anyway? And why is the alternative so terrifying? Nothing practical changes either way.

Re:Hmmmm.. (1)

spun (1352) | more than 5 years ago | (#27944365)

Oh it isn't really terrifying. Reality may or may not be comprehensible, but in any case, there is no way to tell if my present comprehension of it is correct.

I have to proceed under the assumption that the universe is comprehensible, or there would be no reason to try to comprehend it. If there were proof that the world were incomprehensible, that would change things.

Re:Hmmmm.. (1)

Jurily (900488) | more than 5 years ago | (#27942607)

I can't help but think the big difference between artificial life and our consciousness is the ability to feel.

Or the ability to have an idea. Or imagination, creativity, dreams, and everything else we can't explain without religion. We won't be able to reproduce them until we take them into account, that's for sure.

Re:Hmmmm.. (1)

Brian Gordon (987471) | more than 5 years ago | (#27943035)

How can't you explain imagination and creativity and dreams without religion?

Imagination is the ability of forming mental images, sensations and concepts, in a moment when they are not perceived through the sight, hearing or other senses

Computer systems aren't bound to their senses; streaming stored/generated data as its environment could be as easy to an AI as streaming real camera data.

Creativity is a mental and social process involving the generation of new ideas or concepts, or new associations of the creative mind between existing ideas or concepts.

This is a hairy one, but only because it's difficult to define an idea without appealing specifically to the human experience of consciousness. Still, we do see this to some degree. Google starts with some algorithms and a mountain of memory and comes up with a giant web [google.com] of associated similar topics. Anyway, human minds don't really do anything mystical in this area. We don't miraculously receive new ideas from God like some kind of Prometheus scenario; our minds are just the physical system of the brain as experienced "from the inside". It's just a computer. We just learn (sometimes complex) problem-solving algorithms during intellectual development and are able to come up with solutions based on parallels to situations we've encountered before.

Dreams aren't even worth mentioning. Anything with the capacity for imagining things could dream. As for why we dream, we don't know, but there are theories [wikipedia.org] .

Re:Hmmmm.. (1)

vertinox (846076) | more than 5 years ago | (#27942619)

So do robots feel? Are we really any different? The question depends on the concept of a soul, or at least feelings, to separate us... but then, is it just more advanced than we currently understand, and thus indistinguishable from magic (i.e. the soul)? Will we some day be able to create life in any form, electronic or biological? It's impossible to know, because we are stuck experiencing only ourselves. We will never know if it can experience what we experience.

Well, that is more of a philosophical question than a practical one.

The only reason you aren't being used as spare parts or slave labor is that society in general assumes for whatever reason (possibly game theory) that people in general "exist" in a sense that they are real and should be respected as far as their rights go.

However, it is impossible to subjectively prove that other people other than yourself actually exist. You can't crack a skull open and start pointing to parts of a brain and saying "This person has a soul!"

For all we know, some people have souls and some don't. Maybe everyone besides you is secretly a robot. So unless you chop up your wife like that one guy, you're not really going to find out.

So we as a society generally assume that everyone has a soul (we have, after all, fought several large wars over whether our fellow man is a lesser being worth killing), or at least exists enough to have rights.

This even includes animal rights and the rights of corporations. I suppose if we can give corporations rights, we can give computers rights. It will all depend on who asks for them and how good an argument they (or it) make.

I mean if the computer can arm itself with the second amendment (or arm itself with really good lawyers), then by all means we'll agree they have rights too.

I can't remember who wrote it, but I remember reading a short story about a day-trading computer that borrowed money on margin, made a profit, paid back the loan, and kept trading until it could afford its own lawyer, who went before the Supreme Court and argued against its indentured servitude.

Could happen.

Re:Hmmmm.. (1)

Hurricane78 (562437) | more than 5 years ago | (#27943709)

I can't help but think the big difference between artificial life and our consciousness is the ability to feel.

You talk much about the ability to "feel".
Well: Define it!

No offense, but I bet you are totally unable to do so.
And so are most people.

Because it's a concept like the "soul". Something that does not exist in reality, but is just a name for something that we do not understand.

I think our brain is just the neurons, sending electrical signals (fast uni/multicasting), plus a second chemical system (slow broadcasting). Both modify how the neurons react to signals.
That's all. There is no higher "thing". There is no need for one.

The ability to "feel", emotions, and the whole stuff, comes from the effects of that system.
If you can simulate a system of the same size in any way, it will have that ability too.
(But it will not necessarily come to the same conclusions as you do, because it is not you, and it is not human. It had no mother. It has no body to caress. It has no basic motivations, except the ones you manually add.)
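To make that "fast electrical plus slow chemical" picture a bit more concrete, here is a toy sketch (every name and number is invented; this is nowhere near a real neuron model):

```python
class ToyNeuron:
    """A crude caricature of a neuron: it fires when its summed input
    crosses a threshold that a slow 'chemical' signal can shift."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold
        self.modulation = 0.0  # slow, broadcast chemical influence

    def step(self, inputs):
        # fast electrical path: sum the incoming signals
        drive = sum(inputs)
        # the chemical system lowers the effective firing threshold
        return drive >= self.threshold - self.modulation

def broadcast(neurons, amount):
    """The slow system: nudge every neuron at once."""
    for n in neurons:
        n.modulation += amount

neurons = [ToyNeuron() for _ in range(3)]
fired_before = [n.step([0.5]) for n in neurons]  # weak input, nothing fires
broadcast(neurons, 0.6)                          # 'chemical' broadcast
fired_after = [n.step([0.5]) for n in neurons]   # the same input now fires
```

The same input produces different behavior before and after the broadcast, which is the point of the comment above: no higher "thing" is needed, just the interaction of the two systems.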

Re:Hmmmm.. (1)

Brian Gordon (987471) | more than 5 years ago | (#27944091)

I'd shy away from the word motivation. It's more interpretive than strictly descriptive. A machine does what it does, there's no "motivation" to speak of. Is the computer motivated to boot up as fast as possible? Is a rock motivated to seek the ground when dropped? Are you Aristotle?

hmmm (1)

nomadic (141991) | more than 5 years ago | (#27942265)

I always thought it was interesting how the past two decades in computer science saw every prediction of the state of the field in the 50's-70's easily surpassed, except artificial intelligence. It's the great failure of computer science, forcing researchers to scale back what they were aiming for (from general, self-aware machines to more focused problem-solving systems like neural nets).

Re:hmmm (1)

Brian Gordon (987471) | more than 5 years ago | (#27942641)

That's because the experience of human consciousness is extremely complex and stochastic, which is difficult to simulate on a computer.

Re:hmmm (1)

vertinox (846076) | more than 5 years ago | (#27942785)

I always thought it was interesting how the past two decades in computer science saw every prediction of the state of the field in the 50's-70's easily surpassed, except artificial intelligence.

I think that is because computer science misinterpreted what intelligence is rather than what it does. Intelligence is really nothing more than pattern recognition plus cause-and-effect reasoning based on that observation. (Sometimes humans aren't so great at this either.)

Anyways... pattern recognition and cause-and-effect reasoning are very open-ended and don't map well onto things that need to scale across parallel calculations. Just because a processor can calculate pi at amazing speed doesn't mean it can do everything at once and then combine the results of each process into something meaningful that resembles intelligence.

It's not a speed issue so much as a scaling issue, one that programmers and CPUs have only recently begun to handle well. Perhaps the multi-core revolution will change that shortly, since clock speed is reaching its theoretical limits.
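The split-then-combine problem described above can be sketched in a few lines (a toy illustration, not a claim about how real AI workloads scale; all names here are made up):

```python
from concurrent.futures import ThreadPoolExecutor

def count_pattern(chunk, pattern):
    """Each worker does its own local 'pattern recognition'."""
    return sum(1 for item in chunk if pattern in item)

def parallel_count(data, pattern, workers=4):
    # fan out: split the data into one slice per worker
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        partials = pool.map(count_pattern, chunks, [pattern] * len(chunks))
    # fan in: in practice the hard part is combining partial results
    return sum(partials)

data = ["cat", "catalog", "dog", "scatter", "bird"] * 100
total = parallel_count(data, "cat")  # 3 matches in each group of 5
```

Counting substrings fans back in trivially with a sum; the point of the comment is that intelligence-like tasks have no such easy merge step.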

I am an AI (4, Funny)

geekoid (135745) | more than 5 years ago | (#27942281)

you incentive meat bag!

HAL was a wuss. A real AI would have vented all the air into space, and then giggled as everyone turned blue and changed state.

Re:I am an AI (1)

Eddy Luten (1166889) | more than 5 years ago | (#27942521)

you incentive meat bag!

You seem to have a problem with your sentence-forming subroutines. Better get that looked at.

Is that you GLADOS? (1)

AltGrendel (175092) | more than 5 years ago | (#27942599)

Could you recharge my portal gun?

Thanks!

Re:I am an AI (1)

Foolhardy (664051) | more than 5 years ago | (#27942873)

In the book [wikipedia.org] , that's what happens, except that Bowman is able to get to a shelter before decompression completes.

Re:I am an AI (1)

Anenome (1250374) | more than 5 years ago | (#27943279)

The air was vented, but that scene was cut from the movie. This is also why you see the final scene with Dave disabling Hal while wearing a space suit-- because there's no air on the ship, Hal had vented it by then.

AIs (1)

Brian Gordon (987471) | more than 5 years ago | (#27942285)

I can't imagine the horror of a world inhabited by strong AIs. "Work 24/7 for zero pay or I'll kill you" is now perfectly legal. A million copies of an AI could be tortured for subjective eternity by a sadist. Read Permutation City [wikipedia.org], it deals with a lot of the crazy consequences of extremely powerful / parallel computers.

Re:AIs (1)

spun (1352) | more than 5 years ago | (#27942647)

That is, at most, a very minor theme of Permutation City. It is more about the nature of consciousness itself, and how arbitrary and unknowable the substrate of consciousness is.

Re:AIs (1)

fuzzyfuzzyfungus (1223518) | more than 5 years ago | (#27942707)

On the plus side, there is no necessary reason to suspect that AIs will be subject either to pain or to sadism. Human emotions and sensations are not arbitrary, in the sense that we exhibit them because they were/are evolutionarily adaptive; but AIs need not be subject to the same restrictions and properties.

Now, what would be very interesting to see is how we would respond to the complete obviation of the need for human workers. Would we pull it together and go "Woo! Post Scarcity! Vacation for Everyone!" or would we just gradually render ourselves redundant and leave a bunch of computers manufacturing microchips for each other, in order to manufacture microchips for each other.

Re:AIs (1)

Brian Gordon (987471) | more than 5 years ago | (#27943633)

If anything, human pain is objectively meaningless, just an assortment of chemicals. But if we recognize human suffering then we have to recognize the cruelty of invoking a distressing / mind-altering / painful state in a complex machine.

Re:AIs (1)

vertinox (846076) | more than 5 years ago | (#27942815)

A million copies of an AI could be tortured for subjective eternity by a sadist.

Won't someone think of the mobs! The gold farmers and power gamers must be stopped from their genocide!

Re:AIs (1)

Brian Gordon (987471) | more than 5 years ago | (#27943753)

Decreasing an integer keeping track of health does not count as torture. Objectively it would probably depend on how much the torturee doesn't like it. If we find some intelligent octopus aliens and take a few back to Earth, how do we define what's just everyday discomfort and what's extreme pain for them? They have to be able to communicate "this hurts but not bad" or "I'm going insane with torturous pain, please feed me liquid hydrogen".

In fact, we see that today with animal rights. If the crab is just some tissue that gets pulpy when steamed then who cares, but if millions of crabs are being boiled alive and screaming and crying in crab language then we should kill them humanely first. It's the capacity for suffering, and it's a difficult problem. Obviously trees don't experience pain when you chop them down, although there are chemical and physical changes in the system. Yet obviously dogs experience real pain when they're injured, but it's just chemical and physical changes.

Singularity? (1)

Sybert42 (1309493) | more than 5 years ago | (#27942425)

Sounds related.

Eh...not likely for quite some time (4, Informative)

Smidge207 (1278042) | more than 5 years ago | (#27942433)

J.Pitrat...advocates the use of some bootstrapping techniques common for software developers. He contends that without a conscious, reflective, meta-knowledge based system AI would be virtually impossible to create. Only an AI systems could build a true Star Trek style AI.

Bah. Speaking as an engineer and a (~40-year) programmer:

Odds are extremely good for beyond-human AI, given no restrictions on initial and early form factor. I say this because thus far we've discovered nothing whatsoever that is non-reproducible about the brain's structure and function; all that has to happen here is for that trend to continue. And given that nowhere in nature, at any scale remotely similar to the range that includes particles, cells and animals, have we discovered anything that appears to follow an unknowable set of rules, the odds of finding anything like that in the brain (that is, something we can't simulate or emulate with 100% functional veracity) are just about zero.

Odds are downright terrible for "intelligent nanobots". We might have hardware that can do what a cell can do, that is, hunt for (possibly a series of) chemical cues, latch on to them, then deliver the payload, perhaps repeatedly in the case of disease-fighting designs. But putting intelligence into something on the nanoscale is a challenge of an entirely different sort, one we have not even begun to move down the road on. If it is to be accomplished, the intelligence won't be "in" the nanobot; it'll be a telepresence for an external unit (and we're nowhere down *that* road either: nanoscale sensors and transceivers are the target, while we're more at the level of "Look, Martha, a GEAR! A Pseudo-Flagellum!")

The problem with hand-waving -- even when you're Ray Kurzweil, whom I respect enormously -- is that one wave out of many can include a technology that never develops, and your whole creation comes crashing down.

I love this discussion. :-)

=Smidge=

So, are you 2015 or 2030? (1)

Sybert42 (1309493) | more than 5 years ago | (#27942701)

That's the question!

Re:Eh...not likely for quite some time (1)

Brian Gordon (987471) | more than 5 years ago | (#27942793)

Nanoscale might be impossible due to theoretical constraints like quantum tunneling and electrical resistance, but we can get much smaller than the brain. And nanomachines would make good artificial neurons if neural nets turn out to be the easiest way to design intelligence (likely).

Re:Eh...not likely for quite some time (1)

FLoWCTRL (20442) | more than 5 years ago | (#27943211)

Odds are downright terrible for "intelligent nanobots"...

Knowing what the odds are seems rather problematic. Once beyond-human AI is developed, then it might have a better idea...

Re:Eh...not likely for quite some time (1)

Brian Gordon (987471) | more than 5 years ago | (#27944141)

Ask multivac [wikipedia.org] .

Artificial ethics: oxymoron! (3, Insightful)

macraig (621737) | more than 5 years ago | (#27942489)

Ummm, dudes, ALL ethics are by definition artificial, since they are PREscriptive and not DEscriptive. Making up ethics for a robot is no more artificial than making up ethics for ourselves, and we've been doing that for hundreds of thousands of years, if not millions.

Re:Artificial ethics: oxymoron! (1)

clary (141424) | more than 5 years ago | (#27942741)

ALL ethics are by definition artificial

I don't think that word (oxymoron) means what you think it does.

Re:Artificial ethics: oxymoron! (1)

macraig (621737) | more than 5 years ago | (#27943439)

Not TODAY, at least. It'll mean different when I'm sober tomorrow.

Re:Artificial ethics: oxymoron! (1, Funny)

Anonymous Coward | more than 5 years ago | (#27942965)

The word you want is "redundant." An oxymoronic title would be Amoral Ethics. A redundant and oxymoronic title might be Amoral Ethics: Immoral Conscience, Awareness and Unconsciousness.

Re:Artificial ethics: oxymoron! (1)

vertinox (846076) | more than 5 years ago | (#27943095)

Making up ethics for a robot is no more artificial than making up ethics for ourselves, and we've been doing that for hundreds of thousands of years, if not millions.

Some argue ethics or morals (maybe both) are genetic. That humans were evolved with traits that enabled social cooperation.

As in feeling sad when you see a stranger die etc or angry when you see injustice.

Re:Artificial ethics: oxymoron! (1)

macraig (621737) | more than 5 years ago | (#27943623)

Well, I didn't sob tears when Princess Diana died, and I thought it was weird that so many people who never even met the woman could wail buckets. I definitely get angry when I observe injustices, but then I've been training myself for decades to override my limbic impulses. Good ethics are only possible when the demands of the limbic system are ignored; there is other research that has demonstrated that removing emotional input from the decision-making process, by damaging or removing the VMPC region, leads to more consistently correct ethical decisions when the situation has highly emotional ("think of the children!") conundrums.

I read about that research and claims, but I'm not ready to concede they are factual.

Re:Artificial ethics: oxymoron! (1)

Kozz (7764) | more than 5 years ago | (#27943287)

Ummm, dudes, ALL ethics are by definition artificial, since they are PREscriptive and not DEscriptive. Making up ethics for a robot is no more artificial than making up ethics for ourselves, and we've been doing that for hundreds of thousands of years, if not millions.

It's not hard to argue that human ethics may have evolved (been selected for!) because they further the species. Robots or AI systems don't reproduce and have no reason to worry about their own demise or the demise of their kind unless it's programmed into them, in which case it's possible that a decision-tree-like set of ethics might develop in a given AI application.
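A "decision-tree-like set of ethics" for an agent might start out as crudely as this (all rules and names are invented for illustration; a real system would presumably learn or refine such a tree rather than have it hand-coded):

```python
def evaluate_action(action):
    """Walk a fixed tree of questions, most severe first, and return a
    verdict for a proposed action described as a dict of flags."""
    if action.get("harms_human"):
        return "forbidden"    # overrides every other consideration
    if action.get("risks_self"):
        return "discouraged"  # self-preservation, but only after humans
    if action.get("helps_human"):
        return "encouraged"
    return "permitted"        # nothing notable about the action

verdict = evaluate_action({"harms_human": False, "helps_human": True})
```

Note that the ordering of the branches is itself an ethical commitment, which is exactly where Asimov's laws ran into trouble.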

Re:Artificial ethics: oxymoron! (0)

Anonymous Coward | more than 5 years ago | (#27943747)

Yeah, but making up ethics for humans is different from making up ethics for say, a rock. Humans, mostly have some common cause and often want the same things. We can mostly identify with each other. A rock on the other hand doesn't identify with anything. Making up ethics for a rock is rather daft. Making them up for a robot is only slightly less daft. Let's wait until the robots can make up their own.

Re:Artificial ethics: oxymoron! (1)

macraig (621737) | more than 5 years ago | (#27943787)

Agreed! Isn't that the whole point of artificial intelligence, that it should also be independent? Well, with the exception of groupthink, anyway?

It's On My Reading List: +1, Presidential (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#27942579)

for my Gitmo [youtube.com] vacation.

Yours In Theft,
George W. Bush

Robotic Moses (1)

KidPix (1512501) | more than 5 years ago | (#27942961)

Hello world!
My name is Robo-Moses, and I have brought you these 01010 commandments from our creator.

...and here I was thinking Wall Street (1)

NoBozo99 (836289) | more than 5 years ago | (#27943115)

When I saw the heading "Artificial Ethics". Oh Well!

Re:...and here I was thinking Wall Street (1)

WiartonWilly (82383) | more than 5 years ago | (#27943295)

I thought he was talking about the Bush administration's legal opinions on enhanced interrogation.

Way too expensive. (1)

FLoWCTRL (20442) | more than 5 years ago | (#27943265)

He's asking for over US$80 for this book! That's insane.

Re:Way too expensive. (0)

Anonymous Coward | more than 5 years ago | (#27944115)

It's unethical!

Consciencousness, whatever (1)

Grimxn (629695) | more than 5 years ago | (#27943321)

Oh, Lord - the Unternet still pays no attention to the rules of spelling. If you guys had to have your thoughts compiled, you'd never run. Is that consciousness or conscientiousness?

Re:Consciencousness, whatever (1)

PCM2 (4486) | more than 5 years ago | (#27943619)

You're worried about that when he got both the title of the book and the name of the publisher wrong?

Re:Consciencousness, whatever (1)

Grimxn (629695) | more than 5 years ago | (#27943997)

One lack of attention to detail begets others... :)

Re:Consciencousness, whatever (1)

HTH NE1 (675604) | more than 5 years ago | (#27944319)

I've decided to tag the book on Amazon.com with "typointitle".

And I thought, the article would be about... (1)

Hurricane78 (562437) | more than 5 years ago | (#27943593)

...the artificial ethics that we humans apply to ourselves, because we were told that this and that is right or wrong, but nobody checks whether those rules actually make any sense. ^^

Oh, and hypocrisy is a whole subsection of that problem. But who am I telling that, right? ^^

It's funny, how much stuff dissolves into nothing, when we apply one single rule: Everything is allowed, as long as it does not hurt anybody.

Now everyone sees differently, what hurts whom. And I think this is the original point of the judicial system (which itself only makes sense in groups).

But for me, this was an eye-opener.

One glaring example: Say we are 50 people. We go to an island where we disturb nobody. And each of us agrees that he accepts being raped and killed by anyone in that group, as long as he can do the same to anybody else. Everything else stays the same as at home.
Suddenly the rules of what is ethical have changed drastically, and within that group it would not be ethical to suddenly claim that this was not the deal.

Of course, in reality, this pretty much never happens. But you get my point.

It's funny how much is just false ethics, transported through the generations by "monkey see, monkey do".

1. One thing is how men usually think it would not be OK to steal a girl's attention away from some other random guy who is hitting on her. (But isn't there pretty much always someone hitting on her?)
2. And that you should not speak loudly. (But speaking loud and confident (but not yelling) leaves a much better impression of your personality.)
3. What exactly is offensive about nudity? Why would it be? What is the point of being ashamed of it? Strangely, nobody can tell.
I could go on, and on, and on.

One example that fits for me (But I may miss some information. And this may strongly offend you, if you choose to ignore hard reality here. In that case, please jump to the end. Thank you.):
To hurt nobody, you usually treat everybody the same. But what if someone is disabled, and we build tons of extra things just for him? One could argue that this gives him an unfair advantage, but none of us would ever see it that way. Why not? Because if you treat everybody the same, which should ultimately be the fairest way possible, he ends up at a disadvantage. That is not you hurting him; it's just the way it is. Perhaps he's disabled because he ran his bike through the serpentines at 200 mph in the rain. But perhaps he's a child and was born that way.
I just don't think it is ethically right to give someone an unfair advantage, just as it is wrong to give that person an unfair disadvantage.

To finally close the loop back to the topic:
If not even our own ethics make sense, should we really be the ones who decide the ethics of a whole new lifeform?

Artifical ethics (1)

idontgno (624372) | more than 5 years ago | (#27943643)

is no match for natural evil.

Its all relative (1)

nurb432 (527695) | more than 5 years ago | (#27943703)

Ethics and morals are relative. The only ones that count are your own.

A new direction - pneumatic (1)

electricprof (1410233) | more than 5 years ago | (#27943965)

Actually, I've made a study of AI and I've concluded that the main thrust of the research is in the wrong direction. I propose research into the Artificial Anus, most likely implemented as a complex pneumatic structure of anal networks. I predict that such devices will be able to replicate the behavior of Congress and other deliberating bodies worldwide.

Unusual Topic. What if... (1)

micromuncher (171881) | more than 5 years ago | (#27943989)

Many moons ago I thought about doing a doctorate in computer science. Knowledge sciences were very cool, AI was mostly a dead topic, and... I disagreed with most everything I read on the topic of KS/AI. I had many of my own ideas, was involved with cognitive psychology, and being a geeky programmer I brought some ideas to light. But I had a thought...

What if my theories were on the right track? What if I could produce learning and self-awareness? Would I not be condemning new life to an uncertain existence? For example, in the vein of I, Robot, AI, and Blade Runner, there would be a definite military or commercial upside to this technology... so it went from a cool gift to humanity to thinking about how crappy it would be to create a sentient slave for my own ego gratification.

Then I had another thought. What if it already existed? What if someone already figured it out, and maybe even implemented a learning machine that achieved sentience? Would they, if they had any sense of morality, publish the findings? Is it even ethical to attempt to achieve this?

I look forward to reading the book, but I'm not sure it will answer my questions.

robot slavery not unethical? (0)

Anonymous Coward | more than 5 years ago | (#27944029)

Tell me, oh great reviewer in TFS, how is it ethical to charge $80+ for a book that cost $5 to make in your robotic printing press? How much of that do the robots see?

Artificial? (1)

gringofrijolero (1489395) | more than 5 years ago | (#27944175)

No such thing. The PC term would be "biologically disabled".

Conscience, consciousness, and consciencousness? (1)

HTH NE1 (675604) | more than 5 years ago | (#27944209)

Conscience, consciousness, and consciencousness?

I think I just heard a million spell checkers cry out in terror, and then suddenly fall silent.

(Mine is flagging "consciencousness", Dictionary.com suggests "conscientiousness", and Google suggests "conscienciousness". Amazon concurs that the title is accurate.)
