
Douglas Hofstadter Looks At the Future

timothy posted more than 6 years ago | from the postponing-the-singularity dept.

Books 387

An anonymous reader writes with a link to this "detailed and fascinating interview with Douglas Hofstadter (of Gödel, Escher, Bach fame) about his latest book, science fiction, Kurzweil's singularity and more ... Apparently this leading cognitive researcher wouldn't want to live in a world with AI, since 'Such a world would be too alien for me. I prefer living in a world where computers are still very very stupid.' He also wouldn't want to be around if Kurzweil's ideas come to pass, since he thinks 'it certainly would spell the end of human life.'"



Singularity is naive (4, Interesting)

nuzak (959558) | more than 6 years ago | (#23771065)

Is it just me, or does the Singularity smack of dumb extrapolation? "Progress is accelerating by X, ergo it will always accelerate by X".

I mean, if I ordered a burrito yesterday, and my neighbor ordered one today, and his two friends ordered one the next day, does that mean in 40 more days, all one trillion people on earth will have had one?
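
A quick sketch of the arithmetic behind the joke above (Python; the daily doubling of orders is just an assumption for illustration, not anything from the article):

# Naive doubling extrapolation: 1 order on day 1, 2 on day 2, 4 on day 3, ...
# Cumulative total after n days is 2**n - 1.
days = 40
total = sum(2**d for d in range(days))   # 1 + 2 + 4 + ... + 2**39
print(total)                             # 1099511627775, i.e. roughly a trillion

Which is the point: the extrapolation, not the burrito, produces the trillion.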

Re:Singularity is naive (3, Interesting)

servognome (738846) | more than 6 years ago | (#23771227)

I don't think it's necessarily dumb extrapolation, but I do think not all the variables are included.
AIs exist in a perfectly designed environment: humans feed them power and data, and all they need to do is process. At some point computers will need to interact with the environment, and it is then that everything will slow down, and probably take a step backwards.
Massive amounts of processing power will have to be reassigned to tasks currently taken for granted, like acquiring data. Imagine the size of Deep Blue if it had to actually see the board and physically move the pieces.

Re:Singularity is naive (2, Interesting)

Pvt. Cthulhu (990218) | more than 6 years ago | (#23771829)

The Singularity is not just about improving computers' metacognition until they become aware, but also about augmenting ourselves. We can be the self-improving 'artificial' intelligences. And processing power need not be purely electrical. Mechanical computers used to be the norm; is what they do not also information processing? And what of 'natural' processors? I imagine that if you engineered a brain-like neural mass of synthetic cells, it could play a mean game of chess. Replace the executive system of a monkey's brain with that, and you have a monkey that could beat Kasparov just as easily as Deep Blue could, and it could move the pieces itself.

Re:Singularity is naive (5, Informative)

bunratty (545641) | more than 6 years ago | (#23771333)

You're not understanding what the singularity is about. What you're describing is a dumb extrapolation. The singularity, in contrast, is the idea that once we develop artificial intelligence that is as smart as the smartest scientists, there is the possibility that the AI could design an improved (i.e. smarter, faster) version of itself. Then that version could design a yet more improved version, even more quickly, and so on. That will mean the rate of scientific progress could be faster than humans are capable of, and we could find ourselves surrounded by technology we do not understand, or perhaps we cannot possibly understand. The idea behind the singularity is feedback, such as the recursion that can be created by the Y combinator in your sig.
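
A toy numerical sketch of the feedback loop described above (Python; this is not anyone's actual model, and the 1.5x speedup per generation and 10-year baseline are made-up assumptions purely for illustration):

# Toy model of recursive self-improvement (illustrative only).
# Assumptions: each generation is 1.5x "smarter" than the last, and design
# time scales inversely with the designer's intelligence.
intelligence = 1.0      # generation 0: human-level, by definition
design_time = 10.0      # years for humans to build generation 1 (made up)
elapsed = 0.0

for gen in range(1, 11):
    elapsed += design_time
    intelligence *= 1.5                 # the successor is smarter...
    design_time = 10.0 / intelligence   # ...and designs its own successor faster
    print(f"gen {gen}: intelligence {intelligence:5.1f}, year {elapsed:6.2f}")

# Intelligence grows geometrically while the gap between generations shrinks,
# so the elapsed time converges toward a finite horizon -- the "singularity".

The runaway comes entirely from the feedback term, which is the part the parent says naive extrapolation misses.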

Re:Singularity is naive (3, Insightful)

smallfries (601545) | more than 6 years ago | (#23771597)

The interview contains one of the best descriptions of the Singularity religion that I've heard:

I think Ray Kurzweil is terrified by his own mortality and deeply longs to avoid death. I understand this obsession of his and am even somehow touched by its ferocious intensity, but I think it badly distorts his vision. As I see it, Kurzweil's desperate hopes seriously cloud his scientific objectivity.

Re:Singularity is naive (3, Insightful)

bunratty (545641) | more than 6 years ago | (#23771723)

I believe that something like the singularity will come to pass, in the sense that super-smart machines will quickly develop. On the other hand, the whole idea of copying human brains just strikes me as silly. I'm really not sure what the interaction between humans and super-smart machines will be. That's one of the key points of the singularity; things will change so much so rapidly that we cannot predict what will happen.

Re:Singularity is naive (1)

flnca (1022891) | more than 6 years ago | (#23771649)

Hey did you read the Culture series of science-fiction novels by Iain M. Banks? :-)

I think the concept of self-evolving AI is very awesome! :-)

Writing a mind program is very interesting... if we understand how the human mind works in abstract terms, then we should be able to design AIs. I'm baffled at the apparent stagnation in AI development. Weren't we supposed to have this already in the '80s? ;-)

Re:Singularity is naive (5, Insightful)

magisterx (865326) | more than 6 years ago | (#23772119)

Just to clarify this excellent post slightly: the concept of a singularity does not entail AI per se. It requires an intelligence capable of enhancing itself in a recursive fashion, but this could in principle be achieved in a number of ways: genetic engineering that then permits the development of better genetic engineering, the direct merging of biological and computer components in a fashion that permits developing better mergers, or, taken to the extreme, even simply ever-better tools for use in developing technology to make yet better tools.

If a singularity does occur, it will likely emerge from multiple paths at once.

Re:Singularity is naive (4, Funny)

William Baric (256345) | more than 6 years ago | (#23772233)

I am already surrounded by technology I do not understand. What would be the difference for me?

Re:Singularity is naive (0)

Anonymous Coward | more than 6 years ago | (#23772235)

the possibility that the AI could design an improved (i.e. smarter, faster) version of itself. Then that version could design a yet more improved version, even more quickly, and so on.
So, if that were the case, then all that has been achieved is a device which makes itself over and over again, faster and better at making itself.

It would be like an overpopulation experiment.

It's even funnier (3, Interesting)

Moraelin (679338) | more than 6 years ago | (#23771525)

Actually, even if it kept accelerating: singularities (a fancy word for when you divide by zero, or your model otherwise breaks down) have so far never created a utopia.

The last one we had was the Great Depression. The irony of it was that it was the mother of all crises of _overproduction_. Humanity, or at least the West, was finally at the point where we could produce far more than anyone needed.

So much that the old-style laissez-faire free-market-automatically-fixes-everything capitalism model pretty much just broke down. There just was no solution to how much a country should produce. Hence my calling it a singularity.

By any kind of optimistic logic, it should have been the land of milk and honey. It was actually _the_ greatest economic collapse in known history, and produced very much misery and poverty.

And the funny thing is, the result was... well, that we learned to tweak the old model and produce less. We still go to work daily, a lot of companies still want overtime, and a whole bunch of people are still dirt-poor. We just divert more and more of that work into marketing, services and government spending. It's a better life than the downward spiral of the 19th century, no doubt. But basically no miracle has happened, and no utopia has resulted. The improvement for the average citizen was incremental, not some revolution.

That was actually one of the least destructive "singularities". Previous ones produced things like the two world wars, as the death throes of old-style colonialism. When the model based on just expanding into new territories and markets reached its end, we went at each other's throats instead. A somewhat similar "singularity" arguably helped the Roman Empire collapse, ushering in a collapse of trade and a return to barbarism. The death throes of feudalism created a very bloody wave of revolutions.

All the way back to the border between the Bronze Age and the Iron Age in Europe, where... well, we don't know exactly what happened there, but whole civilizations were displaced or enslaved, whole cities were razed, and Europe-wide trade just collapsed. Ancient Greece, for example, although most people just think of it as a continuous "Greece", saw the collapse of the Mycenaean civilization and the Achaean dialect it had before, and after some 300 years of the Greek Dark Ages, suddenly almost everyone there speaks Dorian instead. The Greeks and Greek language of Homer are not the same as those of Pericles. (An Achaean League was formed much later, but apparently had not much to do with the original Achaeans.) And, look, they displaced the Ionians too along the way.

We recovered after each of them, no doubt, but basically the key word is: recovered. It never created some utopian/transcendence golden age.

So, well, _if_ our technology model ends up dividing by zero, I'd expect the same to happen. There'll be much misery and pain, we'll _probably_ recover after a while, and life will go on.

Re:It's even funnier (2, Interesting)

javilon (99157) | more than 6 years ago | (#23772293)

Bring one of those Achaeans forward in time to our world and ask him what he sees. He will talk about a golden age of culture and science and health and physical comfort. He won't understand what goes on around him most of the time. This is what the singularities you mention brought to this world. The same probably goes for whatever is lying in the future for us... it may be traumatic, but it will take us forward into an amazing world.

Re:Singularity is naive (4, Funny)

pitchpipe (708843) | more than 6 years ago | (#23771555)

Ho ho ho, those silly AI researchers. Anyone with a brain (not their AI, haha) knows that evolution has produced the pinnacle of intelligence with its intelligent design. Along with the best flying machines, land-traveling machines, ocean-going machines, chess players, mining machines, war machines, space-faring machines, etc., etc. Will they never stop trying to best evolution, only to be shown up time and again?

Re:Singularity is naive (1)

elguillelmo (1242866) | more than 6 years ago | (#23771647)

FTA: "The Singularity" also called by some "The Rapture of the Nerds"
Very funny indeed!

Re:Singularity is naive (2, Insightful)

nbates (1049990) | more than 6 years ago | (#23771795)

I think the reason the extrapolation is not that naive is that intelligence already exists (not just in us, but also in many other species), so saying "one day we'll develop an artificial intelligence" is just saying that one day we'll reproduce something that already exists.

If you use the Copernican principle (i.e., we are not special), it is easy to assume that we, as a species, are neither especially intelligent nor especially stupid. So the statement that there could be an AI more intelligent than us is not that hard to believe.

All this, of course, assuming you don't believe in the soul, god, ghosts and those things.

Re:Singularity is naive (2, Insightful)

flnca (1022891) | more than 6 years ago | (#23771949)

Scientists have not produced a viable AI so far because they focus on the brain rather than on the mind. Brain function is poorly understood, as brain scientists often admit, and hence there's no way to deduce an AI from brain function. The right thing to do would be to focus on abstract things, namely the human mind itself, as it is understood by psychology, perhaps. Even spirituality can help. If god existed, how would s/he think? What would a ghost be like? What is the soul? What does our soul feel? These things are the key to artificial intelligence, not the functional elements of a device we don't fully understand.

Re:Singularity is naive (2, Insightful)

nuzak (959558) | more than 6 years ago | (#23772369)

Oh I fully believe that one day we'll create a machine smarter than us. And that eventually it will be able to create a machine smarter than it. I do disagree with the automatic assumption that it'll necessarily take a shorter cycle each iteration.

Usually the "singularity" is illustrated by some graph going vertical, where I can only assume that X=Time and Y="Awesomeness". The fact that I didn't commute to work on a flying car makes me a bit skeptical.

Re:Singularity is naive (1)

Hamilton Lovecraft (993413) | more than 6 years ago | (#23772319)

Yes. The first bit of a sigmoid growth function looks a lot like an exponential growth function. In the real world, when you have exponential growth in anything, you eventually get a sigmoid [wikipedia.org] if you're lucky and a crash if not.
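
A minimal comparison of the two curves mentioned above (Python; the growth rate and carrying capacity are arbitrary values chosen only to make the point visible):

import math

# Exponential vs logistic growth with the same initial rate (parameters made up).
r, K, x0 = 0.5, 1000.0, 1.0   # rate, carrying capacity, initial value

for t in range(0, 25, 4):
    exp_x = x0 * math.exp(r * t)
    # Closed-form logistic solution with the same r and x0:
    log_x = K * x0 * math.exp(r * t) / (K + x0 * (math.exp(r * t) - 1))
    print(f"t={t:2d}  exponential={exp_x:10.1f}  logistic={log_x:8.1f}")

# For small t the two columns match closely; as the value approaches K the
# logistic column levels off near 1000 while the exponential keeps climbing.

Early data alone cannot tell you which curve you are on, which is exactly the parent's point about extrapolating from the steep part.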

Hail to the robots (4, Insightful)

oever (233119) | more than 6 years ago | (#23771085)

Perhaps Hofstadter has no need for AI or robots, but I would love to see robots reach our level of thinking within my lifetime. Work on AI shows us how we think, and that is very fascinating. The rise of the robots will be *the* big event in our lives.

Re:Hail to the robots (2, Funny)

Anonymous Coward | more than 6 years ago | (#23771179)

I, for one, welcome... oh nevermind.

Re:Hail to the robots (1)

zappepcs (820751) | more than 6 years ago | (#23771667)

You might as well do it... at the rate AI development is proceeding, the earth is more likely to be hit by a life-ending asteroid than be overtaken by AI lifeforms, unless they show up in spaceships and demand to see MacGyver.

Re:Hail to the robots (3, Interesting)

Penguinisto (415985) | more than 6 years ago | (#23771305)

...as long as they don't reach our level of emotional frailties, or reach conclusions that are detrimental to continued human existence.



I know, I know... Asimov's laws, etc., etc. But for a being to be sentient and at the same time reach the same level of thinking that we enjoy, you must give it the freedom to think, without any restrictions... as humans (ostensibly) do. This requires a level of both bravery and careful planning far greater than we as humans are capable of today.


I'm not predicting some sort of evolutionary re-match of Cro-Magnon v. Neanderthal (where this time the robots are the new Cro-Magnon), but it does require a lot of careful thought, in every conceivable (and non-conceivable) direction. When it comes to building anything complex, it's always the things you didn't think of (or couldn't conceivably think of given the level of technology you had when designing) that come back to bite you in the arse (see also every great engineering disaster since the dawn of history).


Best bet would be to --if ever possible-- give said robot the tools to be sentient, but don't even think of giving them any power to actually do more than talk (verbal soundwaves, not data distribution) and think.


It reminds me of an old short story, where a highly-advanced future human race finally created a sentient device out of massive resources, linked from across every corner of humanity. They asked it one question to test it: "Is there a God?" The computer replied: "There is... now."

/P

Asimov's laws are a crock (1)

infonography (566403) | more than 6 years ago | (#23771535)

For anything to understand and identify what constitutes a human life and what would threaten it requires a level of sophistication equivalent to that of a human. The three laws are a fairy tale and can't be encoded. If all it could do were discern the general shape of such a creature, apes would fit the category as well. However, there are more forms of intelligence on this planet besides the human kind. Consider the various other members of the animal kingdom. A lack of verbal skills does not mean they are simply flesh machines. The likely early models for AI would be drawn from the modeling of animals and their behaviors.

I am going to point to a few works and leave it at that.

the Matrix/Animatrix cycle

Dune (look into the back story on why they didn't like thinking machines)

Blood, the last vampire (night of the beasts) by Mamoru Oshii (of Ghost in the Shell fame) and Ghost in the Shell as well.

Re:Hail to the robots (1)

amyhughes (569088) | more than 6 years ago | (#23771637)

Best bet would be to --if ever possible-- give said robot the tools to be sentient, but don't even think of giving them any power to actually do more than talk (verbal soundwaves, not data distribution) and think.

And if that's a good idea, you can bet there will be some who will not do it that way, precisely because not doing it that way is a bad idea, and some will not do it that way because, of course, they're smarter than everyone else.

I don't see technology rendering evil or ego obsolete. I see it making evil and ego more efficient and dangerous.

Re:Hail to the robots (1)

flnca (1022891) | more than 6 years ago | (#23772095)

The old short story you mentioned was written by Fredric Brown. :-)

If AI development focuses on abstract concepts, like the mind, then it can be written like a normal software application. It would have the speed of the computer system it runs on. If such an AI were capable of learning, it would quickly outperform humans, perhaps within minutes. (Did you see the movie Colossus? The American and Russian machines begin to synchronize with each other, develop their own common language, and soon outsmart their creators.)

Dystopias about smart computers often involve a level of thinking that is equal to that of humans (because the creators of these stories were human). Hence, such stories cannot predict the mind of a true AI. If it has the capability to evolve, it might evolve into something we could barely understand.

But I wouldn't think of it as a horror scenario. Rather, such a system could be a great benefit to mankind, a true problem solver. (Unlike Douglas Adams' Deep Thought, it would not deliver an answer like "42". ;-) )

Perhaps we would have less work to do, but we could enjoy infinitely more freedom and infinite resources. (just like the Culture from the novels of Iain M. Banks)

Re:Hail to the robots (1)

Penguinisto (415985) | more than 6 years ago | (#23772311)

(Did you see the movie Colossus? The American and Russian machines begin to synchronize with each other, develop their own common language, and soon outsmart their creators.)



Oh hell yes! (I'm a bit of a b-grade sci-fi movie freak). IIRC, the combined machinery simply told humans that from now on they'd be directed and controlled according to common needs, and used the threat of nuclear warfare (both computers controlled their respective country's ICBM fleet) to back it up, correct?


I agree that there is equal (perhaps greater?) potential for good to come of it than evil, but having seen how most envelope-pushing engineering endeavors have gone, I wouldn't bet the farm on it :) (then again, we might get lucky.)


OTOH, what's to say that an equally likely third outcome, apathy, might not happen? Say, the AI decides to get the hell off of this rock and go find entities that are more mentally capable (and less dirty, ugly, whiny, sloppy, etc.) than the humans who built it... :)

/P

Re:Hail to the robots (1)

BaileDelPepino (1040548) | more than 6 years ago | (#23771429)

Perhaps Hofstadter has no need for AI or robots, but I would love to see robots reach our level of thinking while I'm living. Work on AI shows us how we think and that is very fascinating. The rise of the robots will be *the* big event in our lives.
You may be right in that it would be THE big event in our lives, but I'm with Hofstadter on this one: I won't welcome the event. I find your opinion interesting, though, because I assume that since you're posting on /., you've seen at least one of the Terminator movies (what self-respecting geek hasn't seen T2?). With the number of bugs we tend to have in our programs*, I'm surprised you think that developing true AI would not eventually lead to our very own Skynet fiasco. What, then, do you foresee as the result of developing true AI, and what would you want from it?

*A bit of a double entendre: interpret as "bugs in our brain programming" or "bugs in our software programming" as you will. :-)

Re:Hail to the robots (1)

ady1 (873490) | more than 6 years ago | (#23771467)

This question has been bothering me for a while. Can AI reach the level of human intelligence, and is human intelligence even a static target for it to achieve? We have evolved from very stupid creatures to our present state, and I imagine that we will continue to do so, making our brains more sophisticated and us more intelligent. Just look at people a century ago versus today's modern society. Even a child today is more mentally capable than an adult was 50 years ago.

Re:Hail to the robots (2, Funny)

maxume (22995) | more than 6 years ago | (#23771619)

That's because you're drunk.

Or would you like to show us all a group of children that could produce an atomic weapon given a few years and a few billion dollars?

Re:Hail to the robots (1)

tzot (834456) | more than 6 years ago | (#23772183)

Or would you like to show us all a group of children that could produce an atomic weapon given a few years and a few billion dollars?
Here. Take my two sons, give them a few billion dollars, and tell them to NOT destroy the planet. I'm betting it will take them a couple of months.

Joking aside, your example was a little unlucky (unless I misunderstood it): children are very, very intelligent these days; even if not more intelligent, they have more knowledge resources readily available to them than most of humanity has had throughout its history. And atomic weapons are not exactly "rocket science": learn about uranium, buy two large enough lumps, throw them at each other.

I'd also like to clarify here that my argument is not pro the GP post in any way.

Re:Hail to the robots (1)

the_humeister (922869) | more than 6 years ago | (#23771673)

Uhm, 50 years ago we had already discovered nuclear weapons and relativity, and invented cars, airplanes, etc. Children now are no more or less mentally capable than in the past. It's just that we have more information about the world around us.

Re:Hail to the robots (1)

Prep_Styles (564065) | more than 6 years ago | (#23771699)

I think you're right to assume that human intelligence will continue to expand. However, if we were somehow able to provide AI with our capability for expansion, would it not then be able to expand right along with us? Or perhaps even differently, or faster?

Re:Hail to the robots (1)

safXmal (929533) | more than 6 years ago | (#23772069)

I mostly agree with you. The human mind will grow together with AI. Just as Wikipedia gives the common person more knowledge than the greatest humanist of 50 years ago had, new developments in AI will be used to advance the human mind.

Nor is it necessary for us to understand the complete human mind in order to expand its capacities. Just as a simple pole enables us to jump a 15-foot-high fence, crude electronics could be used to improve our brain.

PS: please forgive my English, it's not my first language.

Re:Hail to the robots (1)

videoBuff (1043512) | more than 6 years ago | (#23771493)

Humanity, whatever that means, improves along an exponential curve. We are always at the knee of the curve.

Do humans become robots because they use contact lenses, laser surgery or artificial limbs? DVDs, powerful PCs and Internet searches are things that were unimaginable even a couple of decades ago. Soon people will figure out how to interface with a computer without keyboards. Would we be robots then? My point is that what is considered "human" will change.

There will be purists, few and far between, who may not use "newfangled" inventions. But the rest of humanity will be swept up in the wave and will never really question what it is to be human. Most people even now conveniently leave those things to religion and worry more about what to do for the coming weekend.
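
On the remark above that "we are always at the knee of the curve": that is a property of exponentials themselves, since the relative growth rate is the same at every point and the "knee" moves with wherever you draw the axes. A small sketch (Python; the 7% growth rate is an arbitrary assumption):

import math

# For exponential growth, the relative slope (growth per unit of current value)
# is constant, so every observer thinks they sit at the dramatic "knee".
r = 0.07  # arbitrary growth rate per unit time

for t in (0, 50, 100, 200):
    value = math.exp(r * t)
    slope = r * math.exp(r * t)          # derivative of e^(r t)
    print(f"t={t:3d}  value={value:12.1f}  slope/value={slope / value:.2f}")

# slope/value is 0.07 at every t: the curve is self-similar, and the "knee"
# is an artifact of the axis scaling rather than a special moment in time.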

OT (-1, Troll)

Cajun Hell (725246) | more than 6 years ago | (#23771495)

PGP/GPG == DRM
Oh? Go on.

Re:OT (1)

countSudoku() (1047544) | more than 6 years ago | (#23771683)

Second that! How is secure text == to DRM? Weird. Oh, here's my ob. on-topic blob:

I once looked at the future, but it was so bright that I had to put on shades.

He is just too old (0)

Anonymous Coward | more than 6 years ago | (#23771547)

Old people often see new things as being unnatural and weird.

I don't think that a transhuman intelligence would spell the end of humanity any more than humanity spelled the end of mammals. In fact, in my own crystal ball, I don't see super machines that are smarter than we are so much as super mind-machine networks (matrix style) in which genuine consciousness is provided by jacked-in human brains (jointly forming a metabrain). We will be cells in its body, and can expect to be treated as such.

Personally, I can't wait.

Re:He is just too old (1)

glittalogik (837604) | more than 6 years ago | (#23772401)

I suspect you may have read one too many Arthur C. Clarke short stories; artificial intelligence and artificial emotion are far from mutually inclusive by default. However, I agree with you to the extent that humans should maintain some level of compassion/respect even for inanimate objects, if only because we need the practice.

There is hope though, check out R is for Robot [wired.com] for some interesting insights into human/machine interaction.

Re:He is just too old (1)

glittalogik (837604) | more than 6 years ago | (#23772417)

Fuck, replied to wrong comment, please ignore previous! What I was going to say to you was that I strongly suspect we'll look back on the Singularity as the end of pre-humans, not humans. It's scary, but I'm looking forward to it.

Re:Hail to the robots (1)

jelizondo (183861) | more than 6 years ago | (#23771645)

Statistically, they are already more intelligent...


Just see how many people voted for Bush twice


Re:Hail to the robots (0)

Anonymous Coward | more than 6 years ago | (#23771783)

You mean roughly 2% of the population of Earth?

Re:Hail to the robots (2, Interesting)

khayman80 (824400) | more than 6 years ago | (#23771733)

I'll have to side with Hofstadter about AI being undesirable, but for different reasons. Most people seem to be worried about artificial intelligences rebelling against us and abusing us. I'm not. I'm worried about humans abusing the artificial intelligences.

I think that most people who want AI for pragmatic reasons are essentially advocating the creation of a slave race. You think companies/governments are going to spend billions of dollars creating an AI, and then just let it sit around playing Playstation 7 games? I doubt it. They'd likely want a return on their investment, and they'd force the program to do their bidding in some manner (choosing stocks, acting as an intelligent front end for advanced semantic search engines, etc.). Maybe this would involve an imperative built into the AI at ground level: "obey your masters". Or it could be more obviously sinister, like a pain/pleasure reward system of the kind used to control human slaves.

Do you think that mainstream society would find this as repugnant as I do? I doubt it. Most people seem to find it difficult to empathize with other humans who have a different skin color, a different religion, or a different sexual orientation. If Average Joe doesn't care about the individual rights of people in Gitmo, he's certainly not going to care about the individual rights of a computer program, which is not even a biological life form.

I would say that any serious AI research needs to be preceded by widespread legislation expanding the definition of individual rights (abandoning the "human rights" label as anachronistic along the way). We need to ensure that all sapient beings, organic or digital, have guaranteed rights. Until then, I think AI researchers are badly misguided: they're naive idealists working towards a noble goal, without considering that they're effectively working to create a new slave race...

Re:Hail to the robots (2, Insightful)

glittalogik (837604) | more than 6 years ago | (#23772453)

I suspect you may have read one too many Arthur C. Clarke short stories; artificial intelligence and artificial emotion are far from mutually inclusive by default. However, I agree with you to the extent that humans should maintain some level of compassion/respect even for inanimate objects, if only because we need the practice.

There is hope though, check out Wired's R is for Robot [wired.com] for some interesting insights into human/machine interaction.

I'm still waiting (0)

Anonymous Coward | more than 6 years ago | (#23771115)

For Kurzweil to buckle under the amount of vitamins he's taking. That'll teach him for taking the future seriously.

I liked "I am a Strange Loop" (1)

MarkWatson (189759) | more than 6 years ago | (#23771143)

The discussion of souls, soul shards, etc. was not what I expected, but I enjoyed this material anyway, and I enjoyed the entire book.

I like and more or less agree with Hofstadter's general take on AI as well: I have been very interested in AI since the mid-1970s when I read "Mind Inside Matter", but I also appreciate the spiritual side of human life, and I still look at human consciousness as a mystery, although attending one of the "human consciousness and quantum mechanics" conferences sort of has me thinking that quantum effects may be part of the mystery.

Re:I liked "I am a Strange Loop" (3, Informative)

abigor (540274) | more than 6 years ago | (#23771199)

That's what mathematician Roger Penrose thinks also, in case you weren't aware. You may want to read his book "The Large, the Small, and the Human Mind".

Re:I liked "I am a Strange Loop" (1)

MarkWatson (189759) | more than 6 years ago | (#23771265)

I read Penrose's and Gardner's "The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics" a long time ago. At the time I did not agree with him, but now I mostly do think that non-quantum digital computers "lack something" required for consciousness and Real AI(tm).

I will look at "The Large, the Small, and the Human Mind" - I just added it to my Amazon shopping cart - thanks for the recommendation.

Re:I liked "I am a Strange Loop" (2, Informative)

Doctor Morbius (1183601) | more than 6 years ago | (#23771451)

Neurons are far too large to be affected by QM effects.

Re:I liked "I am a Strange Loop" (1)

lgw (121541) | more than 6 years ago | (#23772281)

Neurons and the as-yet-fictional quantum computers are both quite different from current computers in that they are both (effectively) massively parallel. It's not unreasonable to suspect, given the complete failure to take even one step towards "real" AI (machine sentience), that this difference might matter.

Re:I liked "I am a Strange Loop" (4, Insightful)

Angostura (703910) | more than 6 years ago | (#23771327)

I found The Emperor's New Mind a remarkably irritating book. As far as I could tell, the whole tome basically boiled down to 'consciousness is spooky and difficult to explain, quantum effects are spooky and difficult to explain, ergo human consciousness probably has its basis in quantum effects'. I didn't read any of his books after that one.

I like Hofstadter a *lot* though. His book of essays from SciAm, Metamagical Themas, is still worth grabbing if you ever see a copy.

Re:I liked "I am a Strange Loop" (4, Interesting)

thrawn_aj (1073100) | more than 6 years ago | (#23772077)

You might be right about Penrose's thesis (about the mind being quantum mechanical) in the book - I have no idea, nor do I particularly care. I have read that book several times over my high school/undergrad/grad career (physics) and I have NEVER read it to the very end (so, I essentially skipped over all his ruminations on the nature of the mind :P).

BUT, I think that his chapters on math and physics and their interface (everything prior to the biology chapters) constitute the SINGLE GREATEST and only successful attempt ever to present a NON-DUMBED DOWN layperson's introduction to mathematical physics. I gained more physical and mathematical insight from that book than I did from any other source prior to graduate school. For that alone, I salute him. Popularizations of physics a la Hawking are a dime a dozen. An "Emperor's new mind" having (what I can only describe as) 'conceptual math' to TRULY describe the physics comes along maybe once in a lifetime.

His latest book is the extension of that effort and the culmination of a lifetime of thinking clearly and succinctly about math and physics. He is the only writer alive who imo has earned the right to use a title like "The road to reality: a complete guide to the laws of physics".

As for Hofstadter, GEB was merely pretty (while ENM was beautiful), but essentially useless (to me) beyond that. Perhaps it was meant as simply a guide to aesthetic appreciation, in which case it succeeded magnificently. As far as reality is concerned, it offered me no new insight that I could see. Stimulating prose though - I guess no book dealing with Escher can be entirely bad. I haven't read anything else by Hofstadter so I can't comment there.

Re:I liked "I am a Strange Loop" (2, Funny)

e2d2 (115622) | more than 6 years ago | (#23771341)

Soul shards?

NERF LOCKS!

Intelligent Beings (2, Insightful)

hawkeye_82 (845771) | more than 6 years ago | (#23771159)

I personally believe that AI will never happen with us humans at our current level of intelligence.

To build a machine that is intelligent, we need to understand how our own intelligence works. If our intelligence was simple enough to understand and decipher, we humans would be too simple to understand it or decipher it.

Ergo, we humans will never ever build a machine that is intelligent. We can build a machine that will simulate intelligence, but never actually make it intelligent.

Re:Intelligent Beings (1)

the_humeister (922869) | more than 6 years ago | (#23771279)

What's the difference between simulating intelligence and "actual" intelligence? If you can't tell the difference via your interactions, then for all intents and purposes there is no difference.

Also, it's fairly "simple" to build an intelligence without understanding how intelligence works. You can either make a whole human brain simulation, or you can go have children.

Re:Intelligent Beings (1)

Eco-Mono (978899) | more than 6 years ago | (#23771463)

I cannot prove that I have consciousness; a computer could probably simulate my failure at witty repartee on Slashdot with ease. But I do have consciousness.

I put it to you that when people talk about "actual" versus "simulated" intelligence, this is what they mean. And it certainly matters to the one who is experiencing it!

Re:Intelligent Beings (1)

the_humeister (922869) | more than 6 years ago | (#23771573)

I cannot prove that I have consciousness; a computer could probably simulate my failure at witty repartee on Slashdot with ease. But I do have consciousness.
Well there's the problem. How do I know that you have consciousness? And how do you know an artificial intelligence wouldn't have consciousness as well?

Re:Intelligent Beings (1)

Wargames (91725) | more than 6 years ago | (#23772015)

I am just wondering when the Internet will wake up. Just a few more self-healing feedback loops and links to a couple of super colliders and... Wham! Hey! What's for breakfast???

Re:Intelligent Beings (1)

Knara (9377) | more than 6 years ago | (#23771487)

It's the "moving goalposts" problem with AI. Any time something reaches some threshold of intelligence, people (for whatever reason) decide "well, that's not really intelligent behavior, what would be intelligent behavior is ".

I tend to agree that so long as the output seems intelligent, the system that produced it can also be considered reasonably intelligent.

Re:Intelligent Beings (2, Interesting)

lgw (121541) | more than 6 years ago | (#23772349)

It's not moving goalposts at all: it's a total failure to take even the smallest step towards machine sentience, by any intuitive definition. Something key is missing. It's not as if we've made software that's as smart as a hamster and now we're working on making it as smart as a dog.

The field of AI research has taken tasks that were once thought to require sentience to perform, and found ways to perform those tasks with simple sets of rules and/or large databases. Isn't even the term "AI" passe in the field now?

It's not moving the goalposts, it's simply a clarification of what sentience means: some level of self-awareness. Even a hamster has it, but no software yet does.

Re:Intelligent Beings (1)

wizardforce (1005805) | more than 6 years ago | (#23771293)

That would be true, except for the fact that your intelligence is only possible because your brain is constructed of billions of cells with discrete, knowable functions and mechanisms. Ultimately, because it is made of a finite number of components, components that are becoming better understood over time, it is only a matter of time before humans like us engineer something of equal if not superior function and design compared to our own intelligence.

Re:Intelligent Beings (2, Interesting)

e2d2 (115622) | more than 6 years ago | (#23771509)

Also, to add to that: there is no requirement for us to understand the brain in depth, only that we understand how we learn, and in that respect we've come leaps and bounds over the years. Plus, let's not limit ourselves to the human brain. For instance, a dog is intelligent too. A piece of software with the intelligence of a dog could be very useful. Hell, one with the decision-making abilities of a bird would be a nice start. And on and on...

On a tangent:
Intelligence is such a broad word, and then we tack on "Artificial". AI lacks a precise meaning, and if anything needs to be done in the world of AI, it's to create a nomenclature that makes sense and provides a protocol of understanding.

For many, the word AI simply means "human brain in a jar", but that's just one small branch of the AI sciences. But where is our Fujita Scale of artificial intelligence? Where is our toolkit of language (outside of mathematics)?

I ask this seriously, btw; if any of you know about work on this, please post a response.

Re:Intelligent Beings (1)

lgw (121541) | more than 6 years ago | (#23772385)

I prefer the term "machine sentience" for this thing we've so far failed to create, or even just "self awareness". We can write software to do all sorts of difficult tasks, but nothing so far that's smart in the way that even a dog is smart.

Re:Intelligent Beings (1)

lelitsch (31136) | more than 6 years ago | (#23772437)

For instance, a dog is intelligent too. A piece of software with the intelligence of a dog could be very useful. Hell one with the decision making abilities of a bird would be a nice start. And on and on..
I prefer living in a world where my laptop doesn't chase cats. Or a world where my laptop doesn't try to fly away when it sees a cat.

Re:Intelligent Beings (0)

Anonymous Coward | more than 6 years ago | (#23771295)

I personally believe that AI will never happen with us humans at our current level of intelligence.

To build a machine that is intelligent, we need to understand how our own intelligence works. If our intelligence was simple enough to understand and decipher, we humans would be too simple to understand it or decipher it.

Ergo, we humans will never ever build a machine that is intelligent. We can build a machine that will simulate intelligence, but never actually make it intelligent.

Especially when we define intelligence in terms as narrow as what is required for scholastic success, and even then it grossly misses the mark. Why, Feynman had an IQ of "only" 120.

Re:Intelligent Beings (1)

CrazyJim1 (809850) | more than 6 years ago | (#23771397)

AI is simple, really. All you need to do is solve the vision problem: digitizing the world into a 3D imagination space. Once you have sensors that can digitize the world, all sorts of robots will pop up. Arguably someone may raise the point that programming a computer isn't AI, but programming is sort of how robots learn, like Neo in the Matrix. If you want a robot to learn on its own, you'll have to give it trust algorithms so it knows whether it is being told the truth or not. You see a lot of this now with Wikipedia. In school we're drilled that the teachers are always right so we can learn that way, but AI will learn a bit differently.

Anyway, if you want a quick overview of AI done simply: www.fossai.com [fossai.com]

Re:Intelligent Beings (1)

timeOday (582209) | more than 6 years ago | (#23771891)

To build a machine that is intelligent, we need to understand how our own intelligence works.
I don't think so, any more than we had to understand how birds flew to make airplanes, or how our muscles work to make an internal combustion engine. I don't know why so many people believe copying humans is the shortest path to AI.

Re:Intelligent Beings (1)

prockcore (543967) | more than 6 years ago | (#23771953)

True...

the problem is that we're buggy and we make buggy things.

The most advanced game for the PC right now:
http://youtube.com/watch?v=GNRvYqTKqFY [youtube.com]

Re:Intelligent Beings (1)

JambisJubilee (784493) | more than 6 years ago | (#23772051)

To build a machine that is intelligent, we need to understand how our own intelligence works. If our intelligence was simple enough to understand and decipher, we humans would be too simple to understand it or decipher it.

It is a fallacy to think that humans cannot create something more intelligent than ourselves. The creation of AI is analogous to the creation of another human: you don't give the being intelligence, rather you give it the ability to obtain intelligence from its experiences. You don't even need to know how it works!

That's the beauty of programs that can adapt/self modify.
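
A tiny sketch of the "give it the ability to improve from experience" idea (Python; the target vector, scoring rule and mutation scheme are arbitrary stand-ins chosen for illustration, not anything from the comment): the programmer never encodes the answer, only a way to score candidates and keep improvements.

import random

# Minimal hill climbing: the program improves a candidate solution using only
# a scoring function -- the programmer never specifies the answer itself.
TARGET = [7, -3, 12, 0, 5]            # pretend this is unknown to the "designer"

def score(candidate):
    # Higher is better; zero distance is a perfect solution.
    return -sum(abs(c - t) for c, t in zip(candidate, TARGET))

best = [0] * len(TARGET)
for step in range(10_000):
    trial = [x + random.choice((-1, 0, 1)) for x in best]   # small random tweak
    if score(trial) > score(best):                          # keep improvements
        best = trial

print(best, score(best))   # converges to TARGET (score 0), or very close

The same keep-what-works loop, scaled up enormously, is the sense in which a program can end up competent at something its author never specified in detail.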

Re:Intelligent Beings (1)

Valtor (34080) | more than 6 years ago | (#23772071)

We can build a machine that will simulate intelligence, but never actually make it intelligent.
From the full article

You claim that an "I" is nothing but a myth, a hallucination perceived by a hallucination.
His book is all about saying that our consciousness is just an hallucination.

Re:Intelligent Beings (1)

Farenji (1306493) | more than 6 years ago | (#23772121)

The key here is *learning*. A master mason/fighter/chess player/whatever *can* teach someone who will become better than his/her teacher. The only thing we have to do is find out how learning works exactly, and then it's just a matter of time. Computing power is not an issue, nor is memory or storage. Imagine a huge, HUGE neural network fitted with the ultimate learning algorithms: it would beat humankind easily. The problem is that the process of learning and adaptation is far from trivial. But I'm fairly confident we'll be able to solve this mystery sooner or later. I agree with you, though, that we won't be able to understand our own creation. We'll be creating enigmas. I'm not sure whether that will be a good thing...

Re:Intelligent Beings (0)

Anonymous Coward | more than 6 years ago | (#23772377)

I personally believe that AI will never happen with us humans at our current level of intelligence.
Unit 4298700A3 agrees with you.

Kind of a strange response really (4, Interesting)

the_humeister (922869) | more than 6 years ago | (#23771161)

Am I disappointed by the amount of progress in cognitive science and AI in the past 30 years or so? Not at all. To the contrary, I would have been extremely upset if we had come anywhere close to reaching human intelligence -- it would have made me fear that our minds and souls were not deep. Reaching the goal of AI in just a few decades would have made me dramatically lose respect for humanity, and I certainly don't want (and never wanted) that to happen.
Hehe, you mean all the nasty things humanity has done to each other haven't made you lose respect?

I am a deep admirer of humanity at its finest and deepest and most powerful -- of great people such as Helen Keller, Albert Einstein, Ella Fitzgerald, Albert Schweitzer, Frederic Chopin, Raoul Wallenberg, Fats Waller, and on and on. I find endless depth in such people (many more are listed on [chapter 17] of I Am a Strange Loop), and I would hate to think that all that beauty and profundity and goodness could be captured -- even approximated in any way at all! -- in the horribly rigid computational devices of our era.
When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?

Re:Kind of a strange response really (1)

amyhughes (569088) | more than 6 years ago | (#23771439)

When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?

Thinking by the numbers. Comparing specs. That's the same dispassionate thinking that insists a PC is a better value than a Mac and an iPod is lame because...well, you know the joke. The same thinking that values speed and power and low cost and cool and hasn't a clue about grace or elegance or beauty or class.

Re:Kind of a strange response really (1)

statemachine (840641) | more than 6 years ago | (#23771515)

Hehe, you mean all the nasty things humanity has done to each other haven't made you lose respect?
and

When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?
Humans are tempered by their huge vulnerabilities. It does not take much at all to turn us "off". I don't have much faith in a human-created intelligent, sentient robot with very few vulnerabilities. You can still stop and kill a human criminal.

I don't expect much, but my hope is that most of these new robots will want to work with us (and protect us) rather than enslave or exterminate us. Humans throughout history have ruled through the ultimate threat of violence. Why would robots be different?

Think about it. If there were few or no repercussions for your bad actions, would you stop doing them?

Re:Kind of a strange response really (1)

the_humeister (922869) | more than 6 years ago | (#23771601)

Think about it. If there were few or no repercussions for your bad actions, would you stop doing them?
Of course not! If there's anything that MMORPGs have shown us, well that's it.

Re:Kind of a strange response really (1)

mmdog (34909) | more than 6 years ago | (#23771771)

+1 Insightful!

Re:Kind of a strange response really (1)

naoursla (99850) | more than 6 years ago | (#23771577)

When you boil it down, humans are just a collection of carbon, nitrogen, oxygen, and hydrogen (and some other trace elements). What difference does it make if an intelligence is made of mostly "natural" carbon entities vs. mostly "unnatural" silicon entities?


Be nice to the person with future shock. It takes some people a little time to come around to the idea that the only thing they are ultimately good for is serving as a daemon in some obscure subsystem of a weakly godlike intelligence.

And then it takes even longer for them to accept the idea that they are already serving as a daemon in some obscure subsystem of a strongly godlike intelligence.

Re:Kind of a strange response really (0)

Anonymous Coward | more than 6 years ago | (#23771705)

Real depressing if you're looking for employment in the field like me, however. No new developments in the past twelve years? Sign me up.

End of *this* human life... (4, Interesting)

lenski (96498) | more than 6 years ago | (#23771245)

I agree with Douglas; I expect a world shared with AI beings would feel uncomfortably unfamiliar to me. Then again, based on my understanding of Kurzweil's Singularity, it's unlikely to affect me much: I plan to live out my life in meatspace, where things will go on much as before.

(Also according to my understanding of Kurzweil's projections,) it's worth noting, however, that for those willing to make the leap, much of the real growth and advancement will occur in Matrix-space. It's an excellent way to keep "growing" in power and complexity without using more energy than can be supplied by the material world.

Here's my analogy explaining this apparent paradox: Amphibians are less "advanced" than mammals, but still live their lives as they always have, though they are now food for not only their traditional predators but mammals too. ...And pollution and loss of habitat, but through all that, they still live amphibian lives.

In fact, I can't help but wonder how many of us will even recognize when the first AI has arrived as a living being. Stretching the frog analogy probably too far: What is a frog's experience of a superior life form? I am guessing "not-frog". So I am guessing that my experience of an advanced AI life-form is "whatever it does, it/they does it bloody fast, massively parallel, and very very interesting...". Being in virtual space though, AI "beings" are likely only to be of passing interest to those who remain stuck in a material world, at least initially.

Another analogical question: Other than reading about the revolution in newspapers of the day, how many Europeans *really experienced* any change in their lives during the 10 years before or the 10 years after the American revolution? We know that eventually, arrival of the U.S. as a nation caused great differences in the shape of the international world, but life for most people went on afterward about the same as before. The real action was taking place on the boundary, not in the places left behind.

(Slightly off topic: This is why I think derivatives of Second Life type virtual worlds will totally *explode* in popularity: They let people get together without expending lots of jet fuel. I believe virtual world technology IS the "flying car" that was the subject of so many World's Fair Exhibits during the last century.)

Re:End of *this* human life... (5, Interesting)

Zarf (5735) | more than 6 years ago | (#23771709)

The short answer is that Hofstadter and Kurzweil are both wrong. I think Kurzweil's technological development arcs (all those neat exponential curves) probably are disturbingly correct. And Hofstadter is probably right that souls are far more complex things than Kurzweil believes.

So they are both right in ways and wrong in ways. The real rub is that Kurzweil's future is probably farther away but not for the reasons that Hofstadter thinks. The real reasons are probably based in bad technology decisions we made in the last century or two.

We (humanity) have made several technological platform choices that are terrifyingly hard to change now. These choices drove us down a path that we may have to abandon and thus suffer a massive technological set back. In specific the choices were oil, steel, and electricity.

Oil (fossil fuels) will run out. Steel (copper too) is growing scarcer. Electricity is too hard to store and produce (and heats silicon rather inconveniently). Data centers today are built with steel and located near power plants that often produce power using fossil fuel. That means even a Data Center driven life will be affected by our platform limitations.

When we start hitting physical limits to what we can do with these, how much of these supplies we can get, then we will be forced to conserve, change, or stop advancing. Those are very real threats to continued technological advancement. And they don't go away if you hide in Second Life.

Show me a Data Center built with ceramic and powered by the sun or geo-electric sources and I'll recant.

Re:End of *this* human life... (0)

Anonymous Coward | more than 6 years ago | (#23771821)

Of course, what is the purpose of living in meat-space if you have the capabilities to simulate it?

While we do not know when this will happen (50-100 years?), I think it's highly probable that a perfect virtual reality will be tempting enough for most life to abandon its meat-based reality. Why live a life with limitations?

Perhaps this is the answer to the Fermi paradox... once civilisations reach the technological capability, they escape the Universe into their own Universes.

Re:End of *this* human life... (1)

Cadallin (863437) | more than 6 years ago | (#23771839)

Another analogical question: Other than reading about the revolution in newspapers of the day, how many Europeans *really experienced* any change in their lives during the 10 years before or the 10 years after the American revolution? We know that eventually, arrival of the U.S. as a nation caused great differences in the shape of the international world, but life for most people went on afterward about the same as before. The real action was taking place on the boundary, not in the places left behind.

It depends on who you were and where you were. If you were Louis XVI, the effects were pretty radical and immediate, and the experience was shared to a variety of degrees by quite a lot of the people of France.

The further impact of the French Revolution was felt quite acutely by much of Europe, and was for decades (The Napoleonic Wars, major political upheaval, military drafts, injuries, death, etc.) Remember, the American Revolution can be very clearly implicated as a major causative factor in the French revolution (although there were many others).

Louis XVI was executed (January, 1793) very nearly a decade after the official end of the American Revolution (Treaty of Paris: September, 1783).

Not to say I think you are wrong, but I don't think that analogy is a very good reason.

Re:End of *this* human life... (2, Insightful)

timeOday (582209) | more than 6 years ago | (#23771935)

Here's my analogy explaining this apparent paradox: Amphibians are less "advanced" than mammals, but still live their lives as they always have, though they are now food for not only their traditional predators but mammals too. ...And pollution and loss of habitat, but through all that, they still live amphibian lives.
How do you know we're analogous to amphibians instead of dinosaurs?

Re:End of *this* human life... (0)

Anonymous Coward | more than 6 years ago | (#23771977)

I suspect the French would say they experienced significant change in 1789. Depending on the way you count it, that was 6-8 years after the American Revolution ended.

End of 1st phase of human life maybe (1)

Morgaine (4316) | more than 6 years ago | (#23772447)

To say that the advent of powerful AI will spell the end of human life is to misunderstand humanity.

Humanity is an evolving entity, not a static and stagnant one. It has been evolving beyond its mere animal origins for a long time now, and the rate of change is increasing exponentially in step with our mastery of technology. In fact, through our intelligence, we have taken control of evolution away from nature (a very haphazard director at best), and are beating a path towards a very engineered and steadily improving future humanity. It's a continuous process, and it continually redefines humanity through tiny improvements which are often seen as mere remedial changes, such as vitamins and denture correction.

This isn't going to stop, and the machinery of the brain is no exception to this. Yes it's complex, but so is everything else. We're not put off by complexity. In fact, it's a strong driving force for study and mastery, a challenge for our technological capability. While I understand that some people have a natural preference for the things and ways of the past, some of us look forward very optimistically to a future humanity that would be unrecognizable today, a humanity that is physically more robust and capable, mentally expanded through integration with computing machinery, more logical, and far less driven by animal instincts, delusion and hysteria.

It might be valid to claim that technology spelled the end of the 1st phase of human life, perhaps, but that happened a long time ago now, whichever way you measure it. We're nothing like the initial homo sapiens that nature conjured up. And good riddance too. The fleas were probably annoying.

Overlords? (5, Funny)

CarAnalogy (1191053) | more than 6 years ago | (#23771263)

Hofstadter, for one, does _not_ welcome our new AI overlords.

I figure we'll get superhuman AI . . . . (0)

StefanJ (88986) | more than 6 years ago | (#23771345)

. . . at about the same time as we get . . . um, excuse me, doorbell . . .

  . . . oh, shit, a UPS guy in a flying truck just delivered a jet belt and a robot ma[USER STEFANJ UNDERGOING UPGRADE]

hmmm (1)

thatskinnyguy (1129515) | more than 6 years ago | (#23771413)

In the impending robot wars, this guy will be hailed as a champion of humanity. Or just be the guy who said "I told ya so!".

Obligatory xkcd plug. [xkcd.com]

Boooring! (0, Troll)

r_jensen11 (598210) | more than 6 years ago | (#23771521)

I watch TV, I know what's going on, and quit trying to fool us! Forget whatever Douglas Hofstadter says, the future is now!

Deep admiration for humanity. (0, Troll)

jwiegley (520444) | more than 6 years ago | (#23771541)

I am a deep admirer of humanity at its finest and deepest and most powerful, of great people such as Helen Keller, Albert Einstein, Ella Fitzgerald, Albert Schweitzer, Frederic Chopin, Raoul Wallenberg, Fats Waller, and on and on.

There's an optimist for you... 6.6+ billion humans and he can name but a dozen he admires (less than 0.0000002% of the population, all of whom are dead by the way) and then draws the conclusion that humanity as a whole is worthy of deep admiration.

I would argue that is not a very scientifically accurate conclusion based on the evidence available.

Re:Deep admiration for humanity. (1)

Penguinisto (415985) | more than 6 years ago | (#23771641)

Dude... listing a quorum of humanity as a basis for saying they're a pretty cool species? That would get kind of tedious, you know? I mean, yeah, most folks don't RTFA as it is, but do we really want to encourage the act of not R'ing TFA?

I think a monumental list of "enough finest people on Earth to satisfy every skeptic alive that People Are Basically Admirable" would be a bit of a buzzkill (and probably bring out the worst in some people...)


We are the robots (1)

burnitdown (1076427) | more than 6 years ago | (#23771745)

We're charging our battery
And now we're full of energy
We are the robots
We are the robots
We are the robots
We are the robots

We're functioning automatic
And we are dancing mechanic
We are the robots
We are the robots
We are the robots
We are the robots

Ja tvoi sluga, (I'm your slave)
ja tvoi Rabotnik (I'm your worker.)

we are programmed just to do
anything you want us to
we are the robots
we are the robots
we are the robots
we are the robots

we're functioning automatic
and we are dancing mechanic
we are the robots
we are the robots
we are the robots
we are the robots

Ja tvoi sluga, (I'm your slave)
ja tvoi Rabotnik (I'm your worker.)

Ja tvoi sluga, (I'm your slave)
ja tvoi Rabotnik (I'm your worker.)

[repeat to fade]
We are the robots

The future you deserve (0)

gd23ka (324741) | more than 6 years ago | (#23771797)

The future you're looking at is one of an ever-decreasing standard of living, to the point of going hungry in tents; a vast increase in mortality across the board due to malnutrition and vitamin C deficiency; and an even greater drop in male fertility as water supplies are tainted by ever more artificial estrogens. Expect a lot of wars, epidemics, forced inoculations, and radioactive material releases, and be prepared to have your jaw broken with a steel rod because you couldn't produce identification fast enough for the officer, or because you have been hoarding food or, Mother Gaia forbid(!), hunting rats.

Here's your future, punk moron, and it is one YOU RICHLY DESERVE for sitting on your ass and watching TV.

The missing key to AI (1)

mrkitty (584915) | more than 6 years ago | (#23771869)

Cognitive science is something many AI people don't consider. What makes up the human mind? Are emotions really needed? I've recently started a blog dealing with these sorts of things that a few of you may find interesting. http://www.eatheists.com/2008/05/the-challenge-of-mind-duplication-and-transfer/ [eatheists.com]

Come on... (0)

Anonymous Coward | more than 6 years ago | (#23771997)

From the sucking article: [quote]"Do I still believe it will happen someday? I can't say for sure, but I suppose it will eventually, yes. I wouldn't want to be around then, though."[/quote] I am sorry for saying this, but based on your own statement, I hope you die soon enough, because I want to be here when FULL AI happens. And come on, you are clearly not even close to being an expert in this domain, so please shut up.

Cyborgs, not AI (4, Interesting)

Ilyakub (1200029) | more than 6 years ago | (#23772201)

I am far more interested in digitally enhancing human bodies and brains than creating a new AI species.

Consider this: throughout the eons of natural and sexual selection, we've evolved from fish to lizards, to mammals, to apes, and eventually to modern humans. With each evolutionary step, we have added another layer to our brain, making it more and more powerful, sophisticated and most importantly, more self-aware, more conscious.

But once our brains reached the critical capacity that allows abstract thought and language, we've stepped out of nature's evolutionary game and started improving ourselves through technology: weapons to make us better killers, letters to improve our memory, mathematics and logic to improve our reasoning, science to go beyond our intuitions. Digital technology, of course, has further accelerated the process.

And now, without even realizing it, we are merging our consciousness with technology and building the next layer of our brain. The more integrated and seamless the communication between our brains and machines becomes, the closer we get to the next stage in human evolution.

Unfortunately, there is a troubling philosophical nuance that may bother some of us: how do you think our primitive reptilian brain feels about having a frontal lobe stuck to it, controlling its actions for reasons too sophisticated for it to ever understand? Will it be satisfying for us to be to our digital brain as our primitive urges and hungers are to us?

Most people are stupid (1)

ziah (1095877) | more than 6 years ago | (#23772331)

I think it'd be very difficult to simulate a mind like Einstein's, but simulating your average person seems doable. You forget that most people are actually pretty stupid. They have very simplistic rules in their minds, and their end goal is really just to make money. Creating a system that simulates a dumb person doesn't seem like it'd be that difficult: insert probabilities of things occurring into the system, start off with "I don't know anything", and then let it build its knowledge itself. I graduated with a degree in Cognitive Science with an emphasis in computer science from Berkeley, so it's essentially Artificial Intelligence. Dumb people would not be hard to mimic. The hard part comes in recognition of the real world by the computer.
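As a very rough sketch of that "start off knowing nothing and build up probabilities from observation" idea, a toy version in Python might look like the following. The NaiveAgent class, its methods, and the sample events are all made up purely for illustration; this is not a claim about how a real cognitive model would be built.

<ecode>
from collections import Counter

class NaiveAgent:
    """Starts knowing nothing and builds simple beliefs from observation counts."""

    def __init__(self):
        self.counts = Counter()  # how often each event has been seen
        self.total = 0           # total number of observations so far

    def observe(self, event):
        """Record one observation."""
        self.counts[event] += 1
        self.total += 1

    def probability(self, event):
        """Estimated probability of an event, Laplace-smoothed so that
        unseen events are merely unlikely rather than impossible."""
        vocabulary = max(len(self.counts), 1)
        return (self.counts[event] + 1) / (self.total + vocabulary)

    def guess(self):
        """Act on the most frequently seen event -- a crude stand-in for
        'very simplistic rules' driving behaviour."""
        if not self.counts:
            return None  # still knows nothing
        return self.counts.most_common(1)[0][0]

agent = NaiveAgent()
for event in ["coffee", "work", "coffee", "sleep", "coffee"]:
    agent.observe(event)
print(agent.guess())                 # -> coffee
print(agent.probability("lottery"))  # small but non-zero: 0.125
</ecode>

Even this toy shows where the hard part really is: the agent only ever "learns" because something else hands it clean, pre-labelled events, which is exactly the real-world recognition problem mentioned above.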

Re:Most people are stupid (1)

ziah (1095877) | more than 6 years ago | (#23772363)

Intelligence does not equal knowledge. Knowledge is what you know; intelligence is pattern matching. The set of patterns to choose from is knowledge: the more knowledge you have, the larger the set of patterns to choose from.
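To make the "intelligence is pattern matching over knowledge" framing concrete, here is a deliberately tiny Python sketch. The similarity measure and the sample patterns are invented for illustration only; real pattern matching in a mind would obviously be far richer.

<ecode>
def similarity(a, b):
    """Fraction of aligned positions where two strings agree."""
    if not a or not b:
        return 0.0
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def recognise(observation, knowledge):
    """'Intelligence' here is just picking the stored pattern that best
    matches the observation; 'knowledge' is the set of stored patterns."""
    if not knowledge:
        return None  # nothing known, nothing can be recognised
    return max(knowledge, key=lambda pattern: similarity(observation, pattern))

knowledge = {"hello", "help", "yellow"}   # a tiny knowledge base
print(recognise("hellp", knowledge))      # -> hello (the closest known pattern)
print(recognise("hellp", set()))          # -> None (no knowledge, no recognition)
</ecode>

The point of the toy is just the structural claim above: with an empty knowledge base the recogniser can do nothing, and every pattern added gives it one more thing it is able to recognise.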