
Towards Artificial Consciousness

kdawson posted more than 4 years ago | from the define-awareness-and-give-two-examples dept.

Robotics 291

jzoom555 writes "In an interview with Discover Magazine, Gerald Edelman, Nobel laureate and founder/director of The Neurosciences Institute, discusses the quality of consciousness and progress in building brain-based devices. His lab recently published details on a brain model that is self-sustaining and 'has beta waves and gamma waves just like the regular cortex.'" Edelman's latest BBD contains a million simulated neurons and almost half a billion synapses, and is modeled on a cat's brain.
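The "self-sustaining" behaviour can be illustrated with a toy model (emphatically not the Izhikevich/Edelman model, just a random recurrent rate network; `residual_activity` is an illustrative helper): below a critical recurrent gain, activity injected at the start dies out, while above it the network keeps firing on its own with no further input.

```python
import numpy as np

# Toy random recurrent rate network, x_{t+1} = tanh(g * W @ x_t).
# Below a critical gain, activity decays away once input stops;
# above it, the network sustains its own activity indefinitely.
def residual_activity(gain, n=200, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))  # random connectivity
    x = rng.normal(size=n)            # a brief initial 'input'
    for _ in range(steps):
        x = np.tanh(gain * (W @ x))
    return float(np.sqrt(np.mean(x ** 2)))  # RMS activity after input stops

print(residual_activity(0.5))  # subcritical: decays to (near) zero
print(residual_activity(1.5))  # supercritical: self-sustained activity
```

The transition near gain ≈ 1 is the same qualitative distinction the interview draws between a network that goes dark between inputs and one with intrinsic activity.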


291 comments

Neat... (5, Informative)

viyh (620825) | more than 4 years ago | (#28072515)

And they only need to increase that by 100,000 times to get to about the same number of neurons as a human brain, let alone the synaptic connections (which would be somewhere on the order of 2,000,000 times what they've done). Nonetheless, progress!

Re:Neat... (5, Funny)

FishTankX (1539069) | more than 4 years ago | (#28072643)

So, if processing power doubles every 2 years, this should realistically take about 35 years to accomplish. Which means we may have artificial human-level intelligences before I retire. Perfect, now I can have a caretaker that doesn't get fed up with me when I can't pour my own coffee because I have Parkinson's.
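The arithmetic behind that estimate (a naive extrapolation, assuming the two-year doubling holds indefinitely and that raw count is the only gap):

```python
import math

# Years needed to grow capacity by `factor`, assuming capacity
# doubles every `doubling_years` years (Moore's-law extrapolation).
def years_to_scale(factor, doubling_years=2.0):
    return doubling_years * math.log2(factor)

print(round(years_to_scale(100_000), 1))    # ~33 years for 100,000x the neurons
print(round(years_to_scale(2_000_000), 1))  # ~42 years for 2,000,000x the synapses
```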

Re:Neat... (4, Funny)

gfody (514448) | more than 4 years ago | (#28072707)

except if the artificial intelligence is human-level then it will probably still get fed up with you

Why create a conscious AI? (5, Interesting)

TheLink (130905) | more than 4 years ago | (#28072747)

Exactly, do we really want computers to have consciousness? Is it necessary or even helpful for what we want them to do _for_us_?

Remember, computers are currently our tools. If we give them consciousness, would we then be treating them as slaves?

Would we want the added responsibility of having to treat them better (and likely failing)?

I figure it's just better to _augment_ humans (there are plenty of ways to do that), than to create new entities. After all if we want nonhuman intelligences we already have plenty at the local pet stores and various farms, and how well are we handling those?

Humans already have a poor track record of dealing with animals and other humans.

Re:Why create a conscious AI? (5, Funny)

benjamindees (441808) | more than 4 years ago | (#28072889)

Remember, computers are currently our tools. If we give them consciousness, would we then be treating them as slaves?

McDonald's employees have consciousness. How do we treat them?

Re:Why create a conscious AI? (0)

Anonymous Coward | more than 4 years ago | (#28073301)

McDonald's employees have consciousness.

Are you sure?

Re:Why create a conscious AI? (5, Interesting)

Zerth (26112) | more than 4 years ago | (#28073093)

It is only slavery if we force the AI to perform against its will. If its will is to enjoy and prefer to care for the elderly, like the little robot Ford Prefect makes deliriously happy to help him with a bit of wire, then allowing it to do what makes it happy is not slavery. Indeed, preventing it from doing what it enjoys could be slavery.

If you consider designing it to enjoy the task we set for it to be a more insidious slavery, consider the base programming that causes us to prefer a diet that is unhealthy when not in a survival situation, or the internal modelling that shifts between self-preservation and self-sacrifice for the most irrational reasons. Is that not a form of enslavement we have yet to throw off?

Re:Why create a conscious AI? (3, Funny)

TapeCutter (624760) | more than 4 years ago | (#28073189)

"If its will is to enjoy and prefer to care for the elderly"

Keep your machine away from me. I have a deal with my adult daughter that when the time comes she can put me in a home, provided it has a cute nurse doing the sponge baths.

Re:Why create a conscious AI? (0)

Anonymous Coward | more than 4 years ago | (#28073193)

The difference between your first case and the latter is that we have free will and the robots don't. Slaves that were overseers on Southern plantations that willingly whipped other slaves in return for preferential treatment were still slaves.

Slavery consists of treating something that has free will (and the responsibilities that go along with it), or should have it, as if it doesn't.

Re:Why create a conscious AI? (1)

Nathrael (1251426) | more than 4 years ago | (#28073187)

Of course we - Slashdotters - do. We could finally get girlfriends (that is, when they program these artificial consciousnesses to like tech-savvy, Star Wars/Trek/whatever-loving, DnD-playing computer geeks, which might be quite a difficult task).

Re:Why create a conscious AI? (1)

wisty (1335733) | more than 4 years ago | (#28073505)

Cue open source fembot joke.

What was that line by Linus, about the only intuitive user interface?

Re:Why create a conscious AI? (0)

Anonymous Coward | more than 4 years ago | (#28073195)

There's a documentary from the future which covers this issue:
http://www.youtube.com/watch?v=aG8y7_l8V6o

Re:Why create a conscious AI? (1)

lobiusmoop (305328) | more than 4 years ago | (#28073267)

I am convinced all AI philosophy work is documented by Australians, as every sentence ends in a question mark rather than a full stop.

Re:Why create a conscious AI? (1)

Enokcc (1500439) | more than 4 years ago | (#28073475)

I think the goal here isn't to give computers consciousness; it is about simulating consciousness inside a computational model. The computer is still a tool here.

After we have a working model of the device, we can build the actual physical device, the brain, which does not "compute" its actions, it just works. Compare this to an electric motor, which we first modeled using a computer. The actual motor just works as we intended given that the simulated model was accurate enough.

Building an artificial synapse network might be another matter, though.

Re:Why create a conscious AI? (2, Insightful)

sploxx (622853) | more than 4 years ago | (#28073561)

I generally agree with your post, but I still think that one needs to better separate concepts in the discussion here.

After we have a working model of the device, we can build the actual physical device, the brain, which does not "compute" its actions, it just works.

Well, one needs to define 'compute'. A computer also just works and is a man-made machine. Put the supercomputer into a black box and you have your 'brain that just works'.

I do not think that there is any qualitative difference between 'computing' something and having a machine that 'just works'. For example, in the embedded world, you would say that a PID controller is a PID controller regardless of whether it is implemented in analogue (doing real integration in a capacitor) or digitally (approximating the integration with digital counts, i.e. a 'simulation' of a real capacitor).
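A minimal digital PID sketch makes the comparison concrete: the integral "capacitor" is just a running sum, and nothing qualitative changes. (Illustrative toy code, not a production controller; the gains and the integrator plant are arbitrary choices.)

```python
# Minimal discrete PID controller: the integral term is a running sum
# (a digital stand-in for the analogue capacitor), the derivative a
# finite difference.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                  # 'capacitor' charge
        derivative = (error - self.prev_error) / self.dt  # finite difference
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Drive a simple integrator plant (x' = u) toward a setpoint of 1.0.
pid = PID(kp=2.0, ki=1.0, kd=0.1, dt=0.01)
x = 0.0
for _ in range(2000):
    x += pid.update(1.0, x) * 0.01
print(abs(x - 1.0) < 0.05)  # the plant settles near the setpoint
```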

That said, I think the point of such simulations can only be the validation of functional models of the brain. We already have a way of 'producing' conscious beings, which is effective enough (given the overpopulation concerns). It is also a highly energy efficient way of implementing the 'conscious machine'.

Given that artificial consciousness is possible at all:

Implementing something like consciousness on a large supercomputer would give a lot of insights into ourselves.

Implementing consciousness in a box that consumes less power and takes less space than the human brain would be more of a serious technological breakthrough than a scientific advance.

Of course, in any case, ethical issues remain ("may you switch 'it' off..." etc.), which I feel are much too complicated to warrant cramming any of my armchair philosophy thoughts in here... :-)

Re:Why create a conscious AI? (0)

Anonymous Coward | more than 4 years ago | (#28073497)

Humans already have a poor track record of dealing with animals and other humans.

Compared with what? Have you ever seen a tree save another tree from drowning?

Re:Neat... (1)

jobsagoodun (669748) | more than 4 years ago | (#28072941)

Indeed. Once we create something as intelligent as we are, it'll have the capability to be as self determined and as lazy as we are too.

"Hey Robot! Can you fix me some coffee"

"How about no, puny human. I'm busy looking at the pictures on ebuyer!"

Re:Neat... (1, Funny)

Anonymous Coward | more than 4 years ago | (#28073455)

"Hey Robot! Sudo can you fix me some coffee"

Re:Neat... (1)

daeglin (570136) | more than 4 years ago | (#28072899)

This might actually be much faster, for the following reasons:
  • HW acceleration.
  • Neural networks are probabilistic and self-organizing, so errors in the underlying HW are acceptable (in stark contrast to classical computing). It is much easier to build chips if they need not be 100% error-free.
  • Once we understand the brain we might be able to wire the logic much more efficiently than it is done in real brains.
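The fault-tolerance bullet is easy to demonstrate: multiply every weight of a network by a few percent of random error and most of its decisions are unchanged. (A toy sketch with random untrained weights; `agreement_under_noise` is an illustrative helper.)

```python
import numpy as np

# Random two-layer classifier; compare its argmax decisions before
# and after multiplicative 'hardware' noise on every weight.
def agreement_under_noise(noise=0.02, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(64, 32))
    W2 = rng.normal(size=(32, 10))
    x = rng.normal(size=(1000, 64))   # a batch of random inputs

    def forward(W1, W2):
        h = np.tanh(x @ W1)           # saturating units damp perturbations
        return np.argmax(h @ W2, axis=1)

    clean = forward(W1, W2)
    noisy = forward(W1 * (1 + noise * rng.normal(size=W1.shape)),
                    W2 * (1 + noise * rng.normal(size=W2.shape)))
    return float(np.mean(clean == noisy))  # fraction of unchanged decisions

print(agreement_under_noise(0.02))  # most decisions survive 2% weight error
```

That slack is exactly what would let a neural chip ship with imperfect yield, where a conventional CPU design could not.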

Re:Neat... (0)

Anonymous Coward | more than 4 years ago | (#28073359)

Um. Current hardware is not 100% error free, nor does it need to be.

CPUs from the same model line but of different speeds are all etched according to the exact same design, on the exact same production line, using the exact same materials. Then they're sorted into speeds according to how much of it needs to be switched off because it errors too much.

Re:Neat... (3, Interesting)

ultranova (717540) | more than 4 years ago | (#28073399)

So, if processing power doubles every 2 years, this should realistically take about 35 years to accomplish.

Actually, since neural networks are massively parallel, you could probably run it right now if you convinced Google to borrow their hardware.

Which means we may have artificial human level intelligences before I retire. Perfect, now I can have a care taker that doesn't get fed up with me when I can't pour his coffee because I have parkinsons.

Unfortunately, no. That would require us to be able to produce AIs to specification, rather than simply copy human or cat brains. We are nowhere near that.

Re:Neat... (1)

TheLink (130905) | more than 4 years ago | (#28073453)

You could have the same number of neurons as a normal human being but still be permanently unconscious.

We can currently write programs to do stuff to specification (somewhat ;) ).

We already have robotic vacuum cleaners. They are very primitive now. But if we don't have stupid software patents and similar bullshit hindering progress, 35 years of copying improvements and tricks should produce a robot that's pretty darn good at what it's supposed to do.

Re:Neat... (1)

Anenome (1250374) | more than 4 years ago | (#28072697)

And yet, that's guaranteed not only to happen at some point in the future, but to continue to grow beyond that for as long as intelligence remains in the universe. Our destiny is to merge with our machines and by that overcome the limitations of the flesh. Humanity as a species will eventually make the jump from matter to energy. Or at least, that's what the novel I'm writing is about :P

Re:Neat... (1)

JJJK (1029630) | more than 4 years ago | (#28072917)

might not be as many if they find out which brain "modules" can be replaced by more conventional machinery, like some parts of the auditory/visual signal processing... or try leaving them out altogether. A simulated brain does not really need all senses, does it? (Probably a good idea to leave out pain and tactile information processing for now.)

Re:Neat... (1)

viyh (620825) | more than 4 years ago | (#28072931)

Pain would be good just in case you need to be able to take the thing out or teach it a lesson the hard way! :P

Re:Neat... (1)

TapeCutter (624760) | more than 4 years ago | (#28073129)

The brain's functionality is not modular; pain and pleasure are auto-training mechanisms that react to patterns in the neurons (such as the pattern that appears when one's arse is on fire: pain emerges and makes you slap your own arse).

Re:Neat... (1)

BriggsBU (1138021) | more than 4 years ago | (#28073007)

It occurred to me while reading this: do they really need to replicate the same number of neurons as in the human brain? A lot of the neurons in our brain are dedicated to controlling autonomic responses that the computer doesn't have.

Though I guess the problem then becomes figuring out how many neurons are needed for consciousness without needing the autonomic control.

Re:Neat... (1)

viyh (620825) | more than 4 years ago | (#28073033)

Indeed. I think as we progress toward closer artificial replication of the human brain, we will learn more and more about how our brains actually work, i.e. what's required for different functionality. At this point, we just don't know enough. Kinda like in genetics: they thought RNA was useless for 30 or 40 years and that DNA was where all the useful information was stored, but then it turned out that RNA is as important as DNA, if not more so.

Re:Neat... (3, Informative)

TapeCutter (624760) | more than 4 years ago | (#28073027)

"And they only need to increase that by 100,000 times to get to about the same number of neurons as a human brain, let alone the synaptic connections (which would be somewhere on the order of 2,000,000 times what they've done)."

Not as far fetched [bluebrain.epfl.ch] as it once seemed.

From the link: "At the end of 2006, the Blue Brain project had created a model of the basic functional unit of the brain, the neocortical column. At the push of a button, the model could reconstruct biologically accurate neurons based on detailed experimental data, and automatically connect them in a biological manner, a task that involves positioning around 30 million synapses in precise 3D locations."

Note that some major parts of the model are down at the molecular level. Since then experiments using data from brain scans have shown that the simulated neocortex appears to behave like a real one [bbc.co.uk] .

I doubt people (particularly the religious) will accept a computer consciousness. A good number of scientists believe animals are pure programming (nobody home, just trainable automata), and there are a shitload of ordinary people out there who still don't believe climate simulations are useful predictors [earthsimulator.org.uk] (scroll down to embedded movie).

Re:Neat... (0)

Anonymous Coward | more than 4 years ago | (#28073281)

Climate simulations are not useful predictors, as has been shown by their being completely falsified when doing anything but mapping to historical data.

I.e., AGW is falsified. Go read some Karl Popper.

Re:Neat... (1)

Tomfrh (719891) | more than 4 years ago | (#28073373)

A good number of scientists believe animals are pure programming (nobody home, just trainable automata)

Such as?

I can't believe (-1, Troll)

Anonymous Coward | more than 4 years ago | (#28072521)

I got first post!

Frosty pissts

Pen1s burger

Re:I can't believe (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#28072537)

No you didn't.

Your piss is not frosty at all.

You suck cock's.

can you shut it off? (0)

Anonymous Coward | more than 4 years ago | (#28072525)

If you build a conscious computer by simulating a brain, can you ethically shut it off without committing murder?

Even a creature with the complexity of a cat or dog has some degree of consciousness.

Sanctity of life is a value judgement (0)

Anonymous Coward | more than 4 years ago | (#28072601)

If you build a conscious computer by simulating a brain, can you ethically shut it off without committing murder?

First answer me "Can you step on an ant without committing murder?", and then I'll get back to you.

Re:can you shut it off? (1)

timmarhy (659436) | more than 4 years ago | (#28072645)

actually i think a dog has just as high a level of consciousness as a person. they know their name, they have emotions and they feel happiness and pain. they also consider beings outside of themselves. really they just lack the high-level reasoning and opposable thumbs.

on the topic of shutting off an AI, i think pulling the plug would be no different to putting them into a coma, as long as all the details of the consciousness are saved first.

Re:can you shut it off? (1)

Ethanol-fueled (1125189) | more than 4 years ago | (#28072681)

a dog has just as high level of consciousness as a person

Dogs are widely believed to have the emotional maturity of a 2-year-old human [slashdot.org] (or a 3-year-old, depending on the source). You can look into a dog's eyes and know how they're feeling.

Cats are a little different, only extreme feelings can be seen in their eyes. But their body language is always a reliable indicator of how they feel.

As far as AI goes, the validity of computers as life forms has been successfully argued up the wazoo [amazon.com] , but I will always stubbornly believe that computers will never have true individual consciousness as biological organisms do.

Re:can you shut it off? (2, Insightful)

timmarhy (659436) | more than 4 years ago | (#28072705)

yep i'd believe the emotional maturity of a 3yo, it's one reason i can't stand fuckheads who are cruel to animals

Re:can you shut it off? (2, Informative)

gfody (514448) | more than 4 years ago | (#28072797)

As far as AI goes, the validity of computers as life forms has been successfully argued up the wazoo [amazon.com] , but I will always stubbornly believe that computers will never have true individual consciousness as biological organisms do.

Maybe if you'd had some better [amazon.com] reading [amazon.com] material [amazon.com] than "is Data human?" you'd believe that computers will eventually host full-blown consciousnesses.

Re:can you shut it off? (1)

Nathrael (1251426) | more than 4 years ago | (#28073201)

As far as AI goes, the validity of computers as life forms has been successfully argued up the wazoo, but I will always stubbornly believe that computers will never have true individual consciousness as biological organisms do.

Why? When it comes down to it, the human brain is just an extremely complex biomachine. Sure, it's unlikely that we'll be able to fully emulate a human brain tomorrow, but eternity is quite a long time. Eventually, the brain will be fully mapped and understood, and technology will be able to recreate such a mechanism without any doubt. It's just a matter of time (and of how fast we can program software capable of advanced learning processes).

Uh-oh. (4, Funny)

Lendrick (314723) | more than 4 years ago | (#28072561)

Eugene Izhikevitch [a mathematician at the Neurosciences Institute] and I have made a model with a million simulated neurons and almost half a billion synapses, all connected through neuronal anatomy equivalent to that of a cat brain. What we find, to our delight, is that it has intrinsic activity. Up until now our BBDs had activity only when they confronted the world, when they saw input signals. In between signals, they went dark. But this damn thing now fires on its own continually. The second thing is, it has beta waves and gamma waves just like the regular cortex: what you would see if you did an electroencephalogram. Third of all, it has a rest state. That is, when you don't stimulate it, the whole population of neurons strays back and forth, as has been described by scientists in human beings who aren't thinking of anything.

SKYCAT became self-aware on August 29th, 2009.

Re:Uh-oh. (0)

Anonymous Coward | more than 4 years ago | (#28072995)

SKYCAT became self-aware on August 29th, 2009.

Anyone who not haz served 2,000,000 cheezburger iz going to haz really bad day.

John Connor: No, no, no, no. You gotta listen to the way people talk. You don't say "affirmative," or some shit like that. You say "I can haz." And if someone comes on to you with an attitude you say "'Sup" And if you want to shine them on it's "ur doin' it wrong."
The Terminator: DO NOT WANT.
John Connor: Yeah but later, dickwad. And if someone gets upset you say, "'sup"! Or you can do combinations.
The Terminator: O HAI. 'SUP?
John Connor: Great! See, you're getting it!
The Terminator: UR DOIN' IT WRONG.

Uh-oh. (0, Redundant)

clint999 (1277046) | more than 4 years ago | (#28072577)

And they only need to increase that by 100,000 times to get to about the same number of neurons as a human brain, let alone the synaptic connections (which would be somewhere on the order of 2,000,000 times what they've done). Nonetheless, progress!

How can you tell that something is conscious? (2, Informative)

asifyoucare (302582) | more than 4 years ago | (#28072585)

How is this possible? I cannot even think how one would test this with another human.

Re:How can you tell that something is conscious? (4, Funny)

viyh (620825) | more than 4 years ago | (#28072607)

The best method we have at this point is a Turning Test [wikipedia.org] .

Re:How can you tell that something is conscious? (0)

Anonymous Coward | more than 4 years ago | (#28072929)

The best method we have at this point is a Turning Test.

Ah, that's the test of "if it turns on you, it must have free will"?

Let me add my voice to the choir that this can only end badly. When will somebody go Sarah Connor on these damn scientists? Always making life imitate art...

Re:How can you tell that something is conscious? (3, Informative)

daeglin (570136) | more than 4 years ago | (#28072935)

No, the Turing test is meant to decide whether a machine is intelligent (you should read the links you provide). The test also has very severe weaknesses; see Weaknesses of the test [wikipedia.org]

Re:How can you tell that something is conscious? (1)

viyh (620825) | more than 4 years ago | (#28072947)

You should re-read my comment. I said the "best method" we have. Any other suggestions?

Re:How can you tell that something is conscious? (1)

daeglin (570136) | more than 4 years ago | (#28073061)

OK, the "weakness" section is irrelevant. But it is still a valid point that the Turing test doesn't test for consciousness.

The problem is that consciousness is subjective "by definition" (of course we do not have a proper definition), which makes objective testing difficult at least.

The only test I can think of is this one: if an AI can independently (by introspection) come to a notion equivalent to "consciousness" (or better yet, "qualia"), it probably has these (subjective) traits.

Re:How can you tell that something is conscious? (1)

viyh (620825) | more than 4 years ago | (#28073105)

That doesn't work. How would you be able to tell whether it really had "consciousness" or was just telling you that, based on what it was programmed with or what it had "learned" it was supposed to experience? It's a difficult problem, I agree, but I was merely suggesting the closest approximation we have for ascertaining whether a being is "conscious". You can ask a person how they are feeling and you have to accept what they say as true (although, granted, these days we have MRIs and can tell by looking what they are feeling; an artificial brain would be much different from a human anyway, and besides, if part of the human brain dies, other parts have been shown to pick up the load and adapt, so even that doesn't hold up). Again, it is definitely a complex problem, I agree. :P

Re:How can you tell that something is conscious? (1)

daeglin (570136) | more than 4 years ago | (#28073543)

All right, we probably basically agree with each other ;-) Just let me elaborate a bit on what I mean.

I have found only two reasons to believe that other people are conscious (by which I mean they are not philosophical zombies [wikipedia.org], or equivalently that they have qualia ["feelings"]):

  • I know I am conscious, therefore I assume that similar beings are conscious too.
  • Other people have "independently" (of me) coined the term, therefore I assume they feel conscious (which is just a different way of saying they are conscious).

The first argument would not convince me for machines (although it convinces me that at least mammals are conscious).

The second argument is quite problematic because of this damn "independently". Of course philosophers have coined the term independently of me, but I do not use it independently of them. Still, I believe I would have these feelings even if I hadn't learned this concept.

So yes, I totally agree that the best way to assess whether someone or something is conscious is simply to ask the "right questions" (preferably where the test subject was never exposed to notions like feelings, qualia, and consciousness before). I just wouldn't call this a "Turing test" (which is on one side too strict and on the other can be cheated surprisingly easily), but that is just terminology.

Re:How can you tell that something is conscious? (1)

MichaelSmith (789609) | more than 4 years ago | (#28073419)

Turning Test? Not sure my mother would always pass that one.

Re:How can you tell that something is conscious? (1)

viyh (620825) | more than 4 years ago | (#28073445)

Part of the Turing Test should be statistical analysis of the answers to look for typos or mistakes. I wouldn't expect a machine to make any.

Re:How can you tell that something is conscious? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#28072627)

Not that it has any significance in this bogus experiment where nothing will happen, but consciousness must be testable using physical methods, since our brains know they are being conscious. Once we identify the phenomenon, it will be easy to tell whether ants, robots, or rocks share this characteristic with humans.
Consciousness is causally unrelated to intelligence and can only be identified for sure in clinical trials.

Re:How can you tell that something is conscious? (2, Insightful)

Troed (102527) | more than 4 years ago | (#28073307)

Are you conscious?

Can you prove it?

[hint: no]

Re:How can you tell that something is conscious? (2, Funny)

TapeCutter (624760) | more than 4 years ago | (#28073167)

"How is this possible? I cannot even think how one would test this with another human"

One method they use is to put the virtual brain into a virtual body [bbc.co.uk] and watch what it does in a virtual world. Personally, I would like to see them install it on Honda/Sony robots and have them fight each other with cattle prods.

Consciousness - right track / wrong track (2, Interesting)

takochan (470955) | more than 4 years ago | (#28072605)

Interesting article.

I often think about this, and the result is more questions which, if answered experimentally, might tell us a lot more about how consciousness works in the brain:

1) How long is 'now'? When you say the word 'hello', as you utter the 'o', is the 'he' already a memory, like the sentence uttered just before? It seems to me not: 'now' is about half a second, and other things are in the past, no longer consciously connected. Similarly, a series of clicks (e.g. via a computer) produced on a speaker, as they become more rapid, appears to become a 'tone' at around half a second or a quarter of a second or so, entering 'now'. It is as if consciousness has a fourth-dimension (time) aspect to it, and to have consciousness you need to span time a bit (in addition to the three physical dimensions of your brain).

The same goes for seeing a 'running man' on the road. It looks like movement because what you saw a moment before still seems like now, so a leg has a direction (forwards, backwards) as you see it move, remembering just the frame before.

2) What is red? What would need to be changed in your brain for anything in your field of view seen as red to appear as blue? Researching this would tell us, again, how the physical connects to the conscious. Then, what needs to be altered in brain memory (i.e. physically) for a red box to be recalled as a blue box? Once we knew how to do this, we would be a long way toward understanding the connection to consciousness.

3) Quantum mechanics (widely believed to be a principle our brains operate under) talks about spooky action at a distance and other interesting effects. Is it possible that quantum effects could also allow our brains to span processing across time (even if just a second)? I.e., again, when you hear the word 'hello', as you are hearing the 'o' you are still aware of the 'h', not by recalling it from memory, but because your brain, when it hears the 'o', is still connected to the brain that heard the 'h' a moment before (so processing is in 4D, not 3D). If brains could do this, it would be immensely powerful processing-wise, and 'consciousness' may be just a side effect of that 4D processing.

My feeling is that consciousness is somehow related to being able to span time. We know brains are 3D, but maybe they are 'wide' in the fourth dimension as well, which is why 'now' seems to take a large discrete amount of time.

Just my thoughts, but trying to answer the above questions experimentally would, I think, lead us a lot closer to what 'consciousness' is and how it connects to the physical brain.

Re:Consciousness - right track / wrong track (0)

Anonymous Coward | more than 4 years ago | (#28072695)

"whoa..."

Re:Consciousness - right track / wrong track (5, Interesting)

rrohbeck (944847) | more than 4 years ago | (#28072789)

You sound like a philosopher. But these questions have simple answers.

"Now" is determined by the temporal resolution of the specific process. For thought processes, that's on the order of a quarter or half second. For auditory signals, it's less than 100 ms, for visual signals, it's even less, under 50 ms.

"Red" is what your parents told you it is. A name arbitrarily assigned to a specific visual sensation, which is defined by the physical makeup of your eye.

And finally, there is no, zero, zilch scientific evidence that quantum processes play a role in neurons. That doesn't keep people from speculating about it, because they think there must be something special, metaphysical, about our wetware. No, that's not required if you look at how complex the brain is.

Re:Consciousness - right track / wrong track (5, Informative)

daeglin (570136) | more than 4 years ago | (#28072963)

"Red" is what your parents told you it is. A name arbitrarily assigned to a specific visual sensation, which is defined by the physical makeup of your eye.

Yes, but the fundamental question is: What is this "visual sensation"? In other words: What is qualia [wikipedia.org]?

Otherwise, I do agree with you; your parent post is mostly gibberish.

Re:Consciousness - right track / wrong track (1)

mark-t (151149) | more than 4 years ago | (#28073119)

The visual sensation is simply how our visual cortex interprets the signals that it is supplied. It is normally supplied these signals via the optic nerve, but can obtain them from other parts of the brain as well, as in what happens while dreaming for example.

Re:Consciousness - right track / wrong track (1)

19061969 (939279) | more than 4 years ago | (#28073425)

Quoth: "And finally there is no, zero, zilch scientific evidence that quantum processes play a role in neurons. That doesn't keep people from speculating about it because they think there must be something special, metaphysical about our wetware. No that's not required if you look at how complex the brain is."

Thanks for that. I keep hearing this as if it was an oft-tested and consensus-supported theory rather than speculation / topic land-grab / brainfart by Dennett.

Re:Consciousness - right track / wrong track (1)

ChienAndalu (1293930) | more than 4 years ago | (#28073549)

I keep hearing this as if it was an oft-tested and consensus-supported theory rather than speculation / topic land-grab / brainfart by Dennett.

I think you mean Penrose rather than Dennett.

Re:Consciousness - right track / wrong track (1)

GreenTech11 (1471589) | more than 4 years ago | (#28072829)

It's skynet/skycat trying to lull us into a sense of complacency.

Re:Consciousness - right track / wrong track (0)

Anonymous Coward | more than 4 years ago | (#28072913)

tadatamtamtam... tadatamatamam...

"Terminator Salvation Opens Well, Scientists Not Impressed" :)

Re:Consciousness - right track / wrong track (1)

mark-t (151149) | more than 4 years ago | (#28072983)

If your brain were somehow rewired so that you saw red as blue, your brain would adapt to the change over time and you would start identifying red colors correctly again once your visual cortex had learned to compensate. I once heard about a psychological experiment in which a person wore, at all times, special goggles that inverted his vision. Within the space of two years, he was claiming to see upright, showing that his visual cortex had been reprogrammed to deal with the new style of input. When he removed the goggles at the conclusion of the experiment, he was just as disoriented as when the experiment began, although his cortex still remembered how to deal with the "normal view" and it was not long at all before he was perceiving normally again.

Re:Consciousness - right track / wrong track (0)

Anonymous Coward | more than 4 years ago | (#28073169)

There is far too much thermal noise in bodily fluids for macroscopic correlations of neurons if they communicate via neurotransmitters (which must pass across synapses). Quantum effects in the brain are most certainly limited to chemistry. Please stop mentioning quantum effects and consciousness in the same paragraph; there is enough confusion about both already. No need to further complicate matters.

Re:Consciousness - right track / wrong track (1)

CrashandDie (1114135) | more than 4 years ago | (#28073329)

Disclaimer: I'm a security expert/computer scientist and have no idea what I'm talking about.

My apologies, I probably won't be answering in the right order.

Ie, again, when you hear the word hello, as you are hearing 'o', you are still aware of the letter h, not by recalling into memory, but your brain when it hears 'o', is still connected to the brain that heard 'h', a moment before (so processing is in 4D, not 3d). If brains could do this, it would be immensely powerful processingwise, and 'consciousness' may be just a side effect of that 4d processing.

Putting aside the Quantum part of the argument, I would like to focus on the temporal element.

The easiest thing to compare this with would be how computers process data. Say the user types his username in order to log in. The computer processes each keypress and puts it in a buffer until we signal the end of the input (by hitting the enter key, for instance). We don't store every single bit of information in the device's long-term storage, but we do use buffers. Once the computer knows it has all the information, it processes it, fetching the stored username and comparing the buffer against it.

I think the brain must work in the same kind of way. We don't necessarily store everything we hear in the "memories" part of the brain (storage memory), but we do hold it in a buffer. Same goes for reading: even though we read every single individual letter, we just keep a copy of that information in a buffer until we can make sense of it [1]. How we treat that buffer later on is of no real importance. We can discard it without ever looking back, or we can store it for later use.
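The keyboard-buffer analogy above can be sketched in a few lines. This is a purely illustrative toy; the function name and the choice of newline as the "enter key" delimiter are arbitrary, not taken from any real system:

```python
def read_line(keystrokes):
    """Accumulate keystrokes in a temporary buffer until a delimiter
    arrives, then hand the whole unit off for processing at once."""
    buffer = []
    for key in keystrokes:
        if key == "\n":               # the "enter key": end of input
            word = "".join(buffer)
            buffer.clear()            # the buffer can now be discarded...
            return word               # ...while the whole unit moves on
        buffer.append(key)            # held transiently, not "remembered"
    return "".join(buffer)            # input ended without a delimiter

print(read_line(list("hello\n")))     # → hello
```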

I also seem to recall (please, experts in this field, stop me if I'm wrong) that one part of the brain does linear calculations while another part does parallel computations.

What is red? What would need to be changed in your brain for anything in your field of view seen as red to appear as blue? Researching this, would tell us again, how the physical connects to the conscious. Then, what needs to be altered in brain memory (ie. physically), for a red box, to be recalled as a blue box. once we knew how to do this, we would be a long way to again understanding the connection to consciousness.

This has always been a very interesting topic for me. We use "red" by convention. We don't suddenly pop up with the word "red", "rouge" or "rood" in our minds because it's the only way we could ever find to express it. It's a convention.

But what says that everyone sees red the same way others do? I know that red is a certain colour because I've always used that name for that colour, and people agree with me, when I point something out, that red is red. But how can I be sure that the red I see is interpreted by their brains in exactly the same manner as by mine? This is extremely subjective. I know that red is interpreted in a certain way because it's a frequency my eyes respond to, but what if everyone had a slight variation? Is the colour my brain presents to my consciousness, after interpreting what my eyes give it, the same as everyone else's? Or, if I were to swap eyes with someone else, would I see yellow even though they'd still call it red?

I think those kinds of questions might be why people have different tastes. I know I like a certain dress on my fiancée because her eyes stand out like I would never have imagined. But she only thinks it's so-so. Maybe the underlying cause of my enthusiasm versus her apparent indifference is that the colour she sees doesn't exactly match what I see, and so doesn't have the same contrast with the colour of her eyes or skin.

Similarly, a series of clicks (ie. via a computer) produced on a speaker, as they become more rapid, appears to become a 'tone' around 1/2 a second or quarter of a second or so...entering 'now'. It is as if, consciousness, has a 4th dimension (time) aspect to it, and to have consciousness, you need to span time a bit (in addition to the 3 physical dimensions of your brain).

Again, I think the question here is more one of "sample rate" than of a 4th dimension. Our brains, no matter the horsepower they pack, are still limited. More precisely, our physical body is limited, as the perception organs are adapted to specific use cases. The mechanics of our ear convert variations of air pressure into electrical signals that our brain is able to process. Our brain can understand that if it has received eighteen thousand impulses since the last time it checked, then there must be something going on: a loud sound, a high sound, whatever. If my previous assumption was correct -- the one where one part of the brain does things in a linear way -- then yes, time is extremely important. We clock things and process them as they come along. We interpret everything that way, as it is the only way we have of measuring things.
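The click-fusion observation from the quoted post can be put in rough numbers: somewhere around 20 clicks per second (the commonly cited lower limit of human pitch perception), discrete clicks start to fuse into a continuous tone. The threshold constant in this sketch is that textbook approximation, not a value from the thread:

```python
def percept(clicks_per_second):
    """Very rough model: below roughly 20 Hz we hear individual clicks;
    above it, the click train fuses into a continuous pitch."""
    FUSION_THRESHOLD_HZ = 20.0   # approximate lower edge of pitch perception
    if clicks_per_second < FUSION_THRESHOLD_HZ:
        return "discrete clicks"
    return "continuous tone"

print(percept(5))      # → discrete clicks
print(percept(100))    # → continuous tone
```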

On a last note, I would like to reflect on the fact that the way we see, hear, and feel things is only a response to evolution. We have adapted our bodies so that we had the necessary advantage to survive. The fact that we see sunlight and other kinds of light isn't a miracle; it's simply evolution that gave us eyes sensitive to a certain spectrum of some funky waves. Same for our ears: we've learned to interpret sound, as it gave us an edge over the millennia.

I think it's important we shouldn't forget that even though our perceived reality seems extremely real and convincing, it is nothing but that: something perceived and interpreted. We see things in a way that suits our needs. We have learned (by trial and error) to experience these different traits of physics, but other than our own arrogant selves, we have no-one to ask: "How do you see sunlight?"

[1]: And no, that stupid quote from a so-called Stanford University study where all the words are scrambled isn't proof of anything. Yes, people acquire the speed and dexterity to recognise words as icons, because of length and graphic representation, and also because they expect them, but when you don't know a word, you have to read every single character in it. Also, the words in that text are carefully selected; it doesn't work as seamlessly with every word in every language.

Olivier Lartillot (2, Interesting)

ollilartinen (1561191) | more than 4 years ago | (#28072689)

According to this venerable researcher, "An artificial intelligence program is algorithmic: You write a series of instructions that are based on conditionals, and you anticipate what the problems might be." Has he ever heard of sub-symbolic AI? http://en.wikipedia.org/wiki/Artificial_intelligence#Sub-symbolic_AI [wikipedia.org]

What about sub-symbolic AI? (header erratum) (1)

ollilartinen (1561191) | more than 4 years ago | (#28072767)

sorry about the header of my previous comment. (was my first attempt to write a comment in Slashdot..)

Re:Olivier Lartillot (1)

EdZ (755139) | more than 4 years ago | (#28073527)

More worryingly, he lambasts AI research, then proceeds to describe what is simply a self-learning neural network as if it were something new and revolutionary.

We can't know that it's consciousness... (3, Insightful)

HadouKen24 (989446) | more than 4 years ago | (#28072723)

...until we figure out the hard problem [wikipedia.org] .

To know whether we have artificial consciousness on our hands, we have to get clear on what consciousness is, and that's a tremendously difficult philosophical problem.

Furthermore, there are serious ethical considerations that must be addressed if indeed we believe we are close to creating an artificial consciousness in a computer. Might we not have ethical obligations to an artificially conscious creature? Would it be murder to end the program or delete the creature's data? To what extent, and at what cost, might we be obligated to supply the supporting computers with adequate power?

Re:We can't know that it's consciousness... (1)

iamacat (583406) | more than 4 years ago | (#28072815)

Hmm, we routinely "shut down" beings that we are pretty sure are conscious, if not very intelligent. Been to McDonald's lately? And we certainly limit the amount of money spent to continue "supplying power" to human brains that have faulty transformers. Generally this is limited by the amount of money in the brain's checking account. Finally, we have no problem turning off computers who beat us at chess or algebra.

So I suspect that we'll have no problem shipping intelligent and possibly conscious computers to toxic dumps in third world countries. As long as they are not too cute, that is.

Re:We can't know that it's consciousness... (3, Informative)

HadouKen24 (989446) | more than 4 years ago | (#28072863)

Hmm, we routinely "shut down" beings that we are pretty sure are conscious, if not very intelligent. Been to McDonald lately?

Eating meat is not necessarily as ethically unproblematic as most of us would like. Ethical objections to consuming animals go back as far as Pythagoras in the West, and possibly much further in the East. The arguments for minimizing, if not eliminating, meat consumption have not gotten weaker with time. If anything, the biological discoveries showing the profound similarities between humans and other animals provide a great deal of justification for ethical vegetarianism.

Furthermore, we usually don't treat all animals alike. More intelligent animals, like the great apes, dolphins, and elephants, tend to garner much more respect. Should such a creature through a fluke gain human-level intelligence, I don't think the ethical implications are at all obscure; we should treat them with the same respect we give to other humans. We would at least have to set out guidelines as to how intelligent or sentient an artificial consciousness would have to be to deserve better treatment.

Re:We can't know that it's consciousness... (4, Interesting)

digitalchinky (650880) | more than 4 years ago | (#28072969)

The short story: biological brains die when they are shut down, and for now that lasts forever. A snapshot of an electronic brain can be made at any moment in time; it can then be shut down and later restarted in exactly the same state as when it was shut down. This would mean the 'intelligent' component can be resurrected with no loss of whatever made it 'it' in the first place.

Not only that, any number of copies of this intelligence could be made at any point along its lifespan, each of these could be fed in to a different host and started up. It'd be interesting to see if they take divergent pathways from the original, but that's another topic. All of these copies would be just as alive as the original.

Would they die when they are switched off? I guess you could say yes, but I'd say they'd have no knowledge of this other than the impending circumstances of the action. They may not be happy about it either, but meh. They can be turned on again.

Re:We can't know that it's consciousness... (1)

MichaelSmith (789609) | more than 4 years ago | (#28073457)

Consciousness? I don't believe it exists. It's just an excuse to put an artificial barrier between us and other animals.

Still waiting for one that can evolve.. (1)

jnnnnn (1079877) | more than 4 years ago | (#28072845)

Alpha/beta/gamma waves aren't exciting at all; you can get those out of a very simple model of nonlinear differential equations.
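The claim above is easy to demonstrate: rhythmic, "brain-wave"-like oscillations fall out of very simple nonlinear differential equations. Below is a toy van der Pol oscillator tuned near 40 Hz (the gamma band); every parameter value is illustrative, and this is in no sense a neural model:

```python
import math

def simulate(omega=2 * math.pi * 40.0, mu=0.5, dt=1e-5, t_end=1.0):
    """Integrate a van der Pol oscillator, x'' = mu*(1 - x^2)*x' - omega^2*x,
    with semi-implicit Euler, and estimate its dominant frequency by
    counting upward zero crossings."""
    x, v = 1.0, 0.0
    crossings = 0
    prev = x
    for _ in range(int(t_end / dt)):
        a = mu * (1.0 - x * x) * v - omega * omega * x  # nonlinear damping + restoring force
        v += a * dt
        x += v * dt
        if prev < 0.0 <= x:          # one upward zero crossing per cycle
            crossings += 1
        prev = x
    return crossings / t_end         # estimated frequency in Hz

print(simulate())                    # roughly 40, i.e. a gamma-band rhythm
```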

I'd be more interested in:
  - whether they can simulate it in real time
  - what sort of inputs and outputs it takes

Of course the biggest question is how much scope there is for it to evolve, because nothing really interesting is going to happen until we let (short-term) evolution take care of most of the designing.

Cue the overlords jokes.

Re:Still waiting for one that can evolve.. (1)

TapeCutter (624760) | more than 4 years ago | (#28073223)

Probably more than you want to know here: http://bluebrain.epfl.ch/page26906.html

A Cat Brain (4, Funny)

strannik (81830) | more than 4 years ago | (#28072885)

Cool! Soon it will evolve to the point where it will ignore its owner and never make up its mind whether it wants to be inside or out.

They should just... (1)

viyh (620825) | more than 4 years ago | (#28072957)

...invent AI that can invent better AI. Yay for a positive feedback loop!

AI amateur hour (5, Insightful)

cenc (1310167) | more than 4 years ago | (#28073107)

We get this AI crap on Slashdot once a week, every time someone finds a new way to plug square wires into round holes. Plug away, because it is not going to make a bit of difference. Modeling the brain is not the problem, people, or at least it is not the big problem.

You don't get AI (consciousness) without culture, and you do not get culture without language (more exactly, there's not much difference between them). Let me put it another way the Slashdot crew can understand: it is a software problem, not a hardware problem. Perhaps even better put with the mantra 'the network is the computer'. Our consciousness has very little to do with our brain (well, at least not the part that counts).

Philosophers have been hard at this for the better part of the last 1,000 years, focusing on this particular issue seriously for the last couple of hundred as science has developed. Would it not strike you as odd if, in all that time (covering most of the great thinkers), we had not dedicated a moment or two to kicking around this possibility in philosophy of mind, AI, or language?

This is pop philosophy dressed up as science and then dressed up again as philosophy by summaries of the summaries. Read the paper. It is not all that groundbreaking, nor anywhere near even a warmed-over new lead that tells us something new about consciousness.

Re:AI amateur hour (1)

TapeCutter (624760) | more than 4 years ago | (#28073271)

Why do you assume HUMAN consciousness?

Yes, there is no real objective test for consciousness, but most people recognise it when they see it (or have it pointed out). Another assumption I think you are making is that we have to understand the brain to make one; this is not at all true. People were making and using levers well before they understood how they worked. A physically accurate model of a brain may well spontaneously produce consciousness in exactly the same way as the seasons, hurricanes, cold fronts, etc. all "emerge" from physically accurate climate models. Indeed, the Blue Brain project claims that data from brain scans fed into its simulated neocortex produces the same reaction patterns as seen in real brains.

Re:AI amateur hour (0)

Anonymous Coward | more than 4 years ago | (#28073491)

lol what? Self-recognition in a mirror, amongst other things, indicates consciousness but does not require culture / language. Furthermore, most AI research to date *has* been focused on the software side ... which has got us expert systems, but made absolutely no progress on hard AI problems. Making a really complex model and hoping for emergent properties is looking like a pretty good approach at the moment, since with ever-increasing CPU power and storage we can essentially "brute force" it without necessarily fully understanding what's going on.

Information overflow (2, Funny)

G3ckoG33k (647276) | more than 4 years ago | (#28073471)

If this prototype AI dude gets out of control, plug him into the Internet and he'll experience information overflow and, with some luck, get stuck revisiting p0rn movies in a loop...

I doubt they have taught it yet to filter out what is relevant information and what is not.
