
Marvin Minsky On AI

CowboyNeal posted more than 7 years ago | from the dreaming-of-electric-sheep dept.

Programming 231

An anonymous reader writes "In a three-part Dr. Dobbs podcast, AI pioneer and MIT professor Marvin Minsky examines the failures of AI research and lays out directions for future developments in the field. In part 1, 'It's 2001. Where's HAL?' he looks at the unfulfilled promises of artificial intelligence. In part 2 and in part 3 he offers hope that real progress is in the offing. With this talk from Minsky, Congressional testimony on the digital future from Tim Berners-Lee, life-extension evangelization from Ray Kurzweil, and Stephen Hawking planning to go into space, it seems like we may be on the verge of another AI or future-science bubble."


What makes Doritos chips so good? (-1, Offtopic)

Anonymous Coward | more than 7 years ago | (#18203368)

I just finished a bag of baked Doritos.

Why don't we ever discuss things that matter to us?

another one? (2, Funny)

mastershake_phd (1050150) | more than 7 years ago | (#18203372)

Did I miss the first AI bubble? Was it that chess playing computer?

WHAT COMPUTERS STILL CAN'T DO (2, Informative)

pbn1986 (1070436) | more than 7 years ago | (#18203396)

HA!....you should read Hubert Dreyfus's "What Computers Still Can't Do"....it chronicles a 20-year debate with Minsky, arguing that AI, as Minsky professes it, will never work, on philosophical grounds. A very compelling argument...can't wait to hear his story now.

Re:another one? (4, Insightful)

SnowZero (92219) | more than 7 years ago | (#18203498)

While much of the "traditional AI" hype could be considered dead, robotics continues to advance, and much symbolic AI research has evolved into data-driven statistical techniques. So while the top-down ideas of the older AI researchers haven't panned out yet, bottom-up techniques will still help close the gap.

Also, you have to remember that AI is pretty much defined as "the stuff we don't know how to do yet". Once we know how to do it, then people stop calling it AI, and then wonder "why can't we do AI?" Machine vision is doing everything from factory inspections to face recognition, we have voice recognition on our cell phones, and context-sensitive web search is common. All those things were considered AI not long ago. Calculators were once even called mechanical brains.

Re:another one? (0)

Anonymous Coward | more than 7 years ago | (#18203580)

Uhm.. voice recognition and speech-to-text do NOT work. We've got QUITE A WAYS to go.

Re:another one? (3, Interesting)

ricree (969643) | more than 7 years ago | (#18203912)

Uhm.. voice recognition and speech-to-text do NOT work. We've got QUITE A WAYS to go.
It really, really depends on what you mean by doesn't work. At least some voice recognition has been used in consumer products for a while now. For example, my (now ~2 or 3 year old) phone is capable of voice activation for many of its functions, and in the times I've used it I've had no problems with it.

Re:another one? (1)

NitsujTPU (19263) | more than 7 years ago | (#18203998)

While you're right, I'm not sure that people really consider traditional AI to be dead.

Certainly it's been said that once we know how to do it, people stop calling it AI. I think Ronald Brachman even said something similar in his address to the AAAI a few years ago, but then, we can see at AAAI many examples of what can be considered traditional AI. I think most of the participants in the "new AI" (i.e., behavior-based systems, robotics, and related techniques) see themselves as developing their own approaches, in contrast to other researchers who still pursue traditional techniques.

Certainly, machine learning has become more widely used, though I wouldn't really consider it part of the "new AI" -- perhaps that's not quite what you're driving at. Yes, it's certainly become one of the most popular components of modern research. Of course, this seems to have been a gradual transition, and symbolic approaches are still alive and well.

Re:another one? (1, Funny)

Anonymous Coward | more than 7 years ago | (#18203616)

Did I miss the first AI bubble? Was it that chess playing computer?
Let me see...

AI: Hello. My name is AI. Tell me, what's on your mind?

Anonymous Coward: Did I miss the first AI bubble?

AI: Why do you mention computers?

Anonymous Coward: Was it that chess playing computer?

AI: Are you talking about me in particular?

Anonymous Coward: No, I'm talking about Marvin Minsky's podcast.

AI: How long have you been talking about Marvin Minsky's podcast?

Anonymous Coward: I have been talking about Marvin Minsky's podcast since it was duped on Slashdot.

AI: Go ahead. I'm listening...
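The exchange above parodies ELIZA-style pattern matching, which can be sketched in a few lines. This is a toy illustration only: the patterns, reflections, and responses are invented for the sketch, and the real ELIZA used ranked decomposition/reassembly rules.

```python
import re

# Minimal ELIZA-style responder: first matching keyword rule wins, and any
# captured text has its first-person words "reflected" into second person.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i have been (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"computer", re.I), "Why do you mention computers?"),
    (re.compile(r".*"), "Go ahead. I'm listening..."),
]

def reflect(fragment):
    # Swap first-person words for second-person ones ("my" -> "your", ...)
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(line):
    # Rules are tried in order; captured groups are reflected into the reply.
    for pattern, template in RULES:
        m = pattern.search(line)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I have been talking about the podcast"))
# -> How long have you been talking about the podcast?
```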

Re:another one? (1, Insightful)

Anonymous Coward | more than 7 years ago | (#18204938)

We know about ELIZA. Try something new.

Re:another one? (1)

NitsujTPU (19263) | more than 7 years ago | (#18203954)

The first AI bubble was actually covered in a class that I took. In the 1980s, businesses began adopting artificial intelligence for some of their operations. When businesses saw the limits of what the AI of the time could do, they lost interest, and it "bubbled." However, many of the systems from that era endure. Examples of systems from this time include expert systems that are used in tech support (didn't see that one coming, did ya!) and systems used in financial modeling, part of why computer scientists can still find tons of money in the financial sector if they've got advanced degrees.

Re:another one? (2, Insightful)

Architect_sasyr (938685) | more than 7 years ago | (#18204092)

didn't see that one coming, did ya!

Having been on the receiving end of some of the larger telcos' support systems, and considering the "quality" of so-called "AI" systems today, I would have to suggest it was about the only thing I saw coming ;)

Re:another one? (1, Informative)

Anonymous Coward | more than 7 years ago | (#18204392)

The first AI bubble was actually covered in a class that I took. In the 1980s...
Correct *me* if I am wrong, but if your textbook claims that the first AI bubble started in the 1980s it's pretty wrong. It started way earlier with e.g. attempts at automated translation, SHRDLU, etc. You could always check out the books "What Computers Can't Do" and "What Computers Still Can't Do", written by Dreyfus in the 70s.

Its 2001. Where's HAL? (4, Funny)

patio11 (857072) | more than 7 years ago | (#18203380)

This professor doesn't need AI, he needs a time server. Now.

Re:Its 2001. Where's HAL? (1, Interesting)

Anonymous Coward | more than 7 years ago | (#18203402)

In the 80's there was a big push for AI driven systems called "Expert Systems" that would do things like attempt to diagnose diseases from a list of symptoms, etc.

Re:Its 2001. Where's HAL? (2, Interesting)

Creepy Crawler (680178) | more than 7 years ago | (#18203428)

The AMA eventually led the push that forced that system out, claiming that physicians have some sort of sixth sense for "really bad things," unlike anything you could input into a computer.

Of course, they are the ones that OK devices like that (well, provide input to the FDA), and they are also lobbying for higher status, power, and pay for their doctors. No wonder tech like that is essentially banned.

Re:Its 2001. Where's HAL? (1)

lysergic.acid (845423) | more than 7 years ago | (#18203844)

that would be so awesome. i'd be able to get prescription painkillers, sedatives, and other tightly controlled drugs so easily then! it's easy to see why doctors don't trust the diagnosis of diseases to computers. you can already look up symptoms online at sites like webmd, etc., but making it a trusted establishment that replaces doctors would be foolish--and not just for the reason in the example above.

Re:Its 2001. Where's HAL? (2, Informative)

Anonymous Coward | more than 7 years ago | (#18203598)

Expert systems are another example of something that was once considered "AI" and is now just another app. Your auto mechanic probably uses an expert system in his diagnostics. In medicine, it sees limited use, mostly just to sanity-check a physician's diagnosis (for example, spasmodic coughing probably isn't symptomatic of glaucoma). The pharmacological expert systems would also have been considered AI 30 years ago, but now it's just a bunch of rules.
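The "just a bunch of rules" view can be illustrated with a toy forward-matching rule base. Everything here is invented for illustration -- real medical expert systems such as MYCIN used certainty factors and far larger rule sets.

```python
# A toy rule-based "expert system": diagnosis is just matching symptom sets
# against if-then rules. Rules and symptom names are invented stand-ins.
RULES = [
    ({"spasmodic_cough", "fever"}, "possible bronchitis"),
    ({"eye_pain", "blurred_vision"}, "possible glaucoma"),
]

def diagnose(symptoms):
    """Return every conclusion whose conditions are all present."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= symptoms]  # set-subset test: all conditions met

print(diagnose({"spasmodic_cough", "fever", "headache"}))
# -> ['possible bronchitis']
```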

Re:Its 2001. Where's HAL? (1)

Tablizer (95088) | more than 7 years ago | (#18203808)

This professor doesn't need AI, he needs a time server.

That would probably be slashdotted also, and we'd be stuck in the 70's with weird hair, bad sunshine music, and plaid pants.
       

I know 7 and 1 look similar in some fonts.. (2, Insightful)

QuantumG (50515) | more than 7 years ago | (#18203390)

so I'll say this another way.. thanks for the podcasts from SIX YEARS AGO.

in 2001, *indeed* (2, Informative)

randal23 (872971) | more than 7 years ago | (#18203584)

Mod parent up. The podcast is from 2007, but the talk was "given by Minsky in 2001" (quote from the podcast).

Re:in 2001, *indeed* (3, Interesting)

QuantumG (50515) | more than 7 years ago | (#18203604)

the videos were on Slashdot in 2003.. it's the one where he says stupid autonomous robots are a waste of time.

Erm.. (4, Interesting)

Creepy Crawler (680178) | more than 7 years ago | (#18203398)

Go read Kurzweil's book. He does not directly advocate life expansion. He instead advocates the Singularity.

Our bodies are made up of neurons. Does 1 neuron make us "us"? No. What if each of our brains were linked to a global consciousness? Then each human would be but a neuron..

In essence, we would wake a God.

Re:Erm.. (4, Interesting)

melikamp (631205) | more than 7 years ago | (#18203476)

Or... Borg?!?

Re:Erm.. (1)

Creepy Crawler (680178) | more than 7 years ago | (#18203714)

No no no. The Borg were much more of an allegory for Communism. Also note that they were partially made of flesh. They were the ultimate consumer, whose units would willingly die for the "greater good". Individuality meant nothing, and later on in the ST:TNG universe, the simple act of giving an individual Borg a name acted like a devastating virus. Hugh was "his" name.

Instead, the Singularity suggests that all humans will be made of much more durable substrates (diamondoid processors) and will require nothing more than energy and metal to do anything. The idea is that we could simultaneously link to create a super-conscious entity. We will not need to raze worlds for "food", nor will we need to destroy other civilizations to prevent them from "killing" us.

To get an idea of what this world might become, go read what Greg Egan writes. His sci-fi worlds are what I see resulting from what Kurzweil describes. I hope that our world turns out like those, and stays away from dystopias like Neuromancer and Blade Runner.

Re:Erm.. (1, Insightful)

Anonymous Coward | more than 7 years ago | (#18203784)

I rather thought the Borg was an allegory for Manifest Destiny and the American culture as seen by Native Americans. They practically spelled it out when Picard's telling them 'we don't want to be assimilated! we have our own culture!' and the borg declare human culture to be irrelevant.
  The borg are a technologically superior amalgam of different peoples, cultures, and technologies who demand you abandon your 'backward' way of life and individuality, and insist that you instead adopt their culture, for your own good. That's the united states during the nineteenth century, not the soviets.

Re:Erm.. (0)

Anonymous Coward | more than 7 years ago | (#18204138)

British, American, German, Dutch, French, Soviet, Chinese. It's all imperialism.

Re:Erm.. (0, Insightful)

Anonymous Coward | more than 7 years ago | (#18203740)

Singularity is the nerd version of the rapture.

Seriously, that is some stupid ass shit. Right now, neuroscientists don't fucking know how memories are stored, and you think we'll be hooking brains into the internet or some shit? It's a completely faith based proposition with no evidence for it at all.

Re:Erm.. (2, Insightful)

lysergic.acid (845423) | more than 7 years ago | (#18204206)

it's an idea/concept, not a belief system. just like "god" is a concept, and i can use/reference that concept without subscribing to a particular belief system. i've always found the concept of a godhead machine interesting to think about. i don't know if it'll ever happen, and i don't know if it'd even work, but it certainly incorporates some really interesting premises on the nature of the universe, life, information, and humanity.

Re:Erm.. (1)

Tablizer (95088) | more than 7 years ago | (#18203834)

Our bodies are made up of neurons. Does 1 neuron make us "us"? No. What if each of our brains were linked to a global consciousness. Then each human would be but a neuron..

I was promised I would be a pancreas. Damned salesman!
   

Re:Erm.. (1)

lindseyp (988332) | more than 7 years ago | (#18204052)

I was told variously that I was a dick, a tit, and an arsehole.

Neuron doesn't have the same ring to it, somehow.

Re:Erm.. (0)

Anonymous Coward | more than 7 years ago | (#18204224)

When will Skynet become self-aware?

Re:Erm.. (1)

edschurr (999028) | more than 7 years ago | (#18204330)

Consciousness seems to be a mechanism of one part of the brain: it's largely turned off at night, perhaps so memory bookkeeping can be done, and when it's back on it does higher-level decision making, although its ideas are offered up from a lower level. Why assume it's isomorphic to a behaviour of a so-called global "brain"?

As to what consciousness specifically is, it doesn't matter for the post. I just want to question the use of the term. Even if a global-level behaviour (based on tangible communication and such) mimicked consciousness, we're still lackey neurons confined to Earth.

(No Kurzweil for me...philosophy is a low, low priority.)

singularity is a bunch of nonsense (2, Interesting)

sentientbrendan (316150) | more than 7 years ago | (#18204784)

I'd like to take this opportunity to mention what a bunch of nonsense the singularity is. A great number of people seem convinced that technology is advancing at a pace that will transform the human species into a bunch of immortal gods with access to unlimited energy, etc., where technology solves all of life's problems. Essentially a high-tech version of the rapture.

The general justification is that there are a bunch of exponentially increasing trends in certain isolated areas of technological development, such as Moore's law, which they use to justify the idea that at some point in the near future we're going to have Star Trek-like technology. A realistic and comprehensive look at our civilization of course shows that while some industries are bounding ahead, many if not most important technologies, like our ability to produce and store energy, have made little progress. Our society is making progress in many areas at an admirable clip, but nothing like the singularity is conceivably on the horizon.

As for your idea of merging all of our minds into a single consciousness... that's just absurd. Yes, we've all heard of the Borg, but real-life physics and technology don't work like they do in Star Trek... In the real world that idea doesn't even make sense. Our brains aren't general-purpose computers that can be clustered together... they are highly specialized pieces of equipment that are largely hardwired for tasks such as image and language processing.

In any case, just making a brain *bigger* doesn't necessarily make it smarter. The kind of widely distributed computing you're talking about is only usable for certain classes of parallelizable algorithms... and arguably we don't need our minds "linked" any more than they are right now to do this anyway.

Re:Erm.. (1)

corgi (549186) | more than 7 years ago | (#18205076)

Parent is the dimmest post ever to get "Score: 5, Interesting". Then again, maybe it was moderated by singularityneuronbiowulfcluster of Slashdot experts .-P

A podcast? (4, Insightful)

UbuntuDupe (970646) | more than 7 years ago | (#18203434)

Podcasts are great if you're on the go, but why no transcript for the differently-hearing /.ers? I personally hate having to listen, I'd rather just read it.

Use your AI (1)

PIPBoy3000 (619296) | more than 7 years ago | (#18203546)

It's easy! Just use your AI to listen to it for you and then give you a nice summary with bulleted lists and charts.

Re:Use your AI (1)

UbuntuDupe (970646) | more than 7 years ago | (#18203576)

Ah, good thinking. That's like what I tell my friends who've lost vision, to do with CAPTCHAs: Just have a character-recognition program scan it and then type in the letters it gives you.

real AI is a long way off (5, Interesting)

MarkWatson (189759) | more than 7 years ago | (#18203436)

In the 1980s I believed that "strong AI" was forthcoming, but now I have my doubts; the shift is reflected in the difference in tone between the first Springer-Verlag AI book I wrote and my current skepticism. One of my real passions for decades has been natural language processing (NLP), but even there I am a convert to statistical NLP using word frequencies or Markov models instead of older theories, like conceptual dependency theory, that tried to get closer to semantics.

Just a gut feeling but I don't think that we will develop real general purpose AIs without some type of hardware breakthrough like quantum computers.
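The word-frequency/Markov-model approach mentioned above can be sketched as a bigram table: count which word follows which, then sample a chain. A toy illustration only; the corpus and function names are invented.

```python
import random
from collections import defaultdict

# Minimal bigram Markov model: every observed follower is kept, so repeated
# pairs get proportionally higher sampling weight.
def train(tokens):
    model = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        model[a].append(b)
    return model

def generate(model, start, n=5, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:  # dead end: no word ever followed this one
            break
        out.append(rng.choice(followers))
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran".split())
print(generate(model, "the"))
```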

Re:real AI is a long way off (2, Informative)

Creepy Crawler (680178) | more than 7 years ago | (#18203510)

Well, to get to the heart of your point...

"Just a gut feeling but I don't think that we will develop real general purpose AIs without some type of hardware breakthrough like quantum computers."

Do you think that we humans use some sort of quantum coherence to maintain very short decision chains? If so, where in a cell would be stable enough for such temporary coherence to be maintained? Theories suggest that microtubules MIGHT be able to hold coherence, but most experts say 'probably not'.

However, in support of that theory, a recent study [americanscientist.org] found that water does really weird things in carbon nanotubes at 4 gigapascals and 250 K. H2O helixes are quite interesting, and do show promise for any sort of quantum processing in cells.

Re:real AI is a long way off (1)

russellh (547685) | more than 7 years ago | (#18203938)

Just a gut feeling but I don't think that we will develop real general purpose AIs without some type of hardware breakthrough like quantum computers.
Either that or we reinvent nature.

Re:real AI is a long way off (1, Interesting)

Anonymous Coward | more than 7 years ago | (#18204022)

One of my real passions has for decades been natural language processing (NLP) but even for that I am a convert to statistical NLP using either word frequencies or Markov models instead of older theories like conceptual dependency theory that tried to get closer to semantics.


OK, maybe it's because Natural Intelligence has nothing to do with semantic models, and everything to do with _very_simple_ statistics. I've been working with Bayesian filtering, and it's *amazing* the degree of accuracy you can get from something that simple.

I have to agree with you about hardware, but again, Natural Intelligence shows us that what we need is massive parallelization of simple computing units.
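The Bayesian filtering the parent describes can be sketched as a naive Bayes classifier with add-one smoothing. A toy illustration; the training data and function names are invented.

```python
import math
from collections import Counter

# Tiny naive Bayes text classifier: score each label by log prior plus
# smoothed log likelihood of every token, and pick the highest.
def train(docs):
    counts = {}          # label -> Counter of word frequencies
    priors = Counter()   # label -> number of training documents
    for tokens, label in docs:
        counts.setdefault(label, Counter()).update(tokens)
        priors[label] += 1
    return counts, priors

def classify(counts, priors, tokens):
    vocab = {w for c in counts.values() for w in c}
    best, best_score = None, -math.inf
    for label, words in counts.items():
        score = math.log(priors[label] / sum(priors.values()))
        denom = sum(words.values()) + len(vocab)  # add-one smoothing
        for w in tokens:
            score += math.log((words[w] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

docs = [("buy cheap pills now".split(), "spam"),
        ("meeting agenda attached".split(), "ham")]
counts, priors = train(docs)
print(classify(counts, priors, "cheap pills".split()))
# -> spam
```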

Re:real AI is a long way off (2, Interesting)

modeless (978411) | more than 7 years ago | (#18204218)

Personally I don't think it's quantum computers that will be the breakthrough, but simply a different architecture for conventional computers. Let me go on a little tangent here.

Now that we've reached the limits of the Von Neumann architecture [wikipedia.org], we're starting to see a new wave of innovation in CPU design. The Cell is part of that, but the stuff ATI [amd.com] and NVIDIA [nvidia.com] are doing is also very interesting. Instead of one monolithic processor connected to a giant memory through a tiny bottleneck, processors of the future will be a grid of processing elements interleaved with embedded memory in a network structure. Almost like a Beowulf cluster on a chip.

People are worried about how conventional programs will scale to these new architectures, but I believe they won't have to. Code monkeys won't be writing code to spawn thousands of cooperating threads to run the logic of a C++ application faster. Instead, PhDs will write specialized libraries to leverage all that parallel processing power for specific algorithms. You'll have a raytracing library, an image processing library, an FFT library, etc. These specialized libraries will have no problem sponging up all the excess computing resources, while your traditional software continues to run on just two or three traditional cores.

Back on the subject of AI, my theory is that these highly parallel architectures will be much more suited to simulating the highly parallel human brain. They will excel at the kinds of pattern-matching tasks our brains eat for breakfast. Computer vision, speech recognition, natural language processing; all of these will be highly amenable to parallelization. And it is these applications that will eventually prove the worth of non-traditional architectures like Intel's 80-core chip. It may still be a long time before the sentient computer is unveiled, but I think we will soon finally start seeing real-world AI applications like decent automated translation, image labeling, and usable stereo vision for robot navigation. Furthermore, I predict that Google will be at the forefront of this new AI revolution, developing new algorithms to truly understand web content to reject spam and improve rankings.
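The "specialized library" model described above can be sketched from the application's point of view: hand independent work items to a data-parallel map and let the library schedule them across workers. This uses Python's standard thread pool; the kernel and pixel data are invented stand-ins.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for an expensive per-row pattern-matching kernel: count the
# "bright" pixels in a row.
def match_score(row):
    return sum(1 for px in row if px > 128)

rows = [[0, 200, 255], [10, 20, 30], [129, 130, 0]]
with ThreadPoolExecutor(max_workers=2) as ex:
    # Executor.map preserves input order regardless of completion order.
    scores = list(ex.map(match_score, rows))
print(scores)
# -> [2, 0, 2]
```

The application code stays sequential; only the library needs to know how many workers exist or how work is scheduled.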

Re:real AI is a long way off (1)

Antity-H (535635) | more than 7 years ago | (#18204624)

Instead of one monolithic processor connected to a giant memory through a tiny bottleneck, processors of the future will be a grid of processing elements interleaved with embedded memory in a network structure. Almost like a Beowulf cluster on a chip.

You mean: like a brain?!
What are neurons if not a giant grid of processors, where memory and instruction set are defined by the connections between dendrites and axons? Learning is growing dendrites to connect to new axons. Something else I remember from my biology classes is that the synapse is slow because it uses chemical messengers instead of transmitting the nervous impulse directly.

I probably missed something, but isn't _that_ (the brain structure) a model architecture we could be using and improving (here I'm thinking of carrying the nervous impulse directly instead of using chemicals)?
I mean, we know it works, and we know it delivers a pretty awesome amount of computing power even though we consciously use little of it.

Oh, the bogosity (4, Informative)

Animats (122034) | more than 7 years ago | (#18204230)

In the 1980s I believed that "strong AI" was forthcoming...

In the 1980s, I was going through Stanford CS, where some of the AI faculty were indeed saying that. Read Feigenbaum's "The Fifth Generation" to see how bad it got. It was embarrassing, because very little actually worked. Expert systems really were awfully dumb. They're just another way to program, as is generally recognized today. But back then, there were people claiming that if you could only write enough rules, intelligence would somehow emerge. I knew it was bogus at the time, and so did some other people, but, unlike most grad students, I was working for a big outside company, not a professor, and could say so. At one point I noted that it was possible to graduate in CS, in AI, at the MSCS level, without ever actually seeing an expert system work. This embarrassed some faculty members.

There was a massive amount of self-delusion in Stanford CS back then. When the whole AI boom collapsed, CS at Stanford was moved from the School of Arts and Sciences to Engineering, to give the place some adult supervision. Eventually, the Stanford AI Lab was dissolved. It's been brought back in the last few years, but with new people.

We're making real progress today, finally. Mainly because of a shift to statistical methods with sound mathematical underpinnings, plus enough compute power to make them go. Trying to hammer the real world into predicate calculus was a dead end. But number crunching is working. Computer vision actually sort of works now. Robots are starting to work. Automatic driving works. Language translation works marginally. Voice recognition works marginally. There are real products now.

But the AI field really was stuck for over a decade. The phrase "AI Winter" has been used.

Re:real AI is a long way off (1)

poopdeville (841677) | more than 7 years ago | (#18204278)

"Strong AI" is the name of a philosophical position regarding artificial intelligence. Namely, that a hypothetical AI is "actually" thinking. "Weak AI" is the position that a hypothetical AI is "just" computing.

In any event, the algorithms for "creating" an AI are well understood. You basically need four things: (1) a rule-mining algorithm to mine rules from empirical data, (2) an "introspection algorithm" that periodically examines the mined rules for validity, (3) an "insight algorithm" that comes up with (possible) rules (the introspection algorithm checks these for validity as well), and (4) a loop that executes (1), (2), and (3), as well as relevant rules when demanded by the input.

Rule mining algorithms are slow. Introspection doesn't have to be, especially if the AI is in a position to actively find an answer. Insight is difficult to quantify. We'd like the AI's insight skills to improve through time, so presumably the rules generating the rules would have to be modifiable by the AI itself, either by using the same syntax as the "empirical rules" or through the use of a genetic algorithm. Slow, either way.

But these algorithms are slow. Very slow. It takes humans years to learn to communicate, and we have billions of years of evolution behind us. Moore's law can't keep up.
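The four-part loop described above might be sketched schematically as follows. Every rule, threshold, and function here is an invented stand-in; the actual mining, introspection, and insight algorithms are the hard part being elided.

```python
# Schematic mine/introspect/insight loop over pairs of numbers. The "rules"
# are trivial predicates; real rule mining would search a far larger space.
def mine_rules(data):
    """(1) Keep candidate rules that hold on most observed pairs."""
    candidates = [("x<y", lambda x, y: x < y), ("x>y", lambda x, y: x > y)]
    return [(name, f) for name, f in candidates
            if sum(f(x, y) for x, y in data) / len(data) > 0.8]

def introspect(rules, data):
    """(2) Re-check rules against the data, dropping any that fail."""
    return [(name, f) for name, f in rules if all(f(x, y) for x, y in data)]

def insight():
    """(3) Propose a speculative rule for introspection to vet."""
    return [("x!=y", lambda x, y: x != y)]

def agent_step(data):
    """(4) One pass of the loop: mine, add insights, validate everything."""
    rules = mine_rules(data)
    rules += insight()
    return introspect(rules, data)

data = [(1, 2), (2, 5), (0, 3)]
print([name for name, _ in agent_step(data)])
# -> ['x<y', 'x!=y']
```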

slightly off-topic - general post on AI (3, Interesting)

Shadukar (102027) | more than 7 years ago | (#18203438)

A lot of people think that the main goal of AI is to create a system that is capable of emulating human intelligence.

However, what about looking at this goal from another perspective:

Creating Artificial Intelligence that can pass the Turing Test, which in turn leads toward emulating human intelligence in an artificial way. Once you are there, you might be able to use this so-called Artificial Intelligence to store human intelligence in a consistent, reliable, perfectly encompassing and preserving way.

You then have intellectual-immortality and one more thing ...once you are able to "store" human intelligence, it becomes software. Once it becomes software, you can transfer this DATA.

Once you are there, human minds can travel via laser transmissions at the speed of light :O

Wish i could claim it as my idea, but it's actually from a book called "Emergence", and also touched on in a book called "Altered Carbon"; both good sci-fi reads.

Re:slightly off-topic - general post on AI (1)

Dunbal (464142) | more than 7 years ago | (#18203470)

Once it becomes software, you can transfer this DATA. Once you are there, human minds can travel via laser transmissions at the speed of light

      Sorry to rain on your parade, but that would be a violation of the DMCA. You ain't going nowhere ;)

Re:slightly off-topic - general post on AI (3, Insightful)

bersl2 (689221) | more than 7 years ago | (#18203588)

Um... AI may give rise to consciousness, but it won't give rise to your consciousness. We still don't know what makes you "you"; way too much neuroscience to be done.

slightly off-topic - general post on up/down. (0)

Anonymous Coward | more than 7 years ago | (#18203746)

"You then have intellectual-immortality and one more thing ...once you are able to "store" human intelligence, it becomes software. Once it becomes software, you can transfer this DATA."

This implies that intelligence is a component separable from the hardware, as opposed to an emergent result of the hardware itself.* In other words, top-down vs. bottom-up all over again.

*Intrinsically bound.

Re:slightly off-topic - general post on up/down. (1)

poopdeville (841677) | more than 7 years ago | (#18204320)

Perhaps. But "hardware" is kind of a loaded term. A Java VM is just as much "hardware" in this context as a real life x86.

Re:slightly off-topic - general post on AI (2, Funny)

Tablizer (95088) | more than 7 years ago | (#18203886)

A lot of people think that the main goal of AI is to create a system that is capable of emulating human intelligence.

No, regular Joe defines it as the ability to fetch a beer, and go to the store to buy them if the fridge is out.
       

Re:slightly off-topic - general post on AI (1)

Shadukar (102027) | more than 7 years ago | (#18203948)

No, regular Joe defines it as the ability to fetch a beer, and go to the store to buy them if the fridge is out.

That, i think, is one of the biggest deterrents/issues keeping the public in their caves:

A database of your habits that predicts what beer you will want next using statistics IS NOT AI - that's database statistics.

A database that compiles your FPS accuracy and your movement patterns vs certain geometric shapes is NOT AI - that's a good FPS engine sub-system.

An elevator that is linked to the swipe-card system and comes to your floor when you are about to leave the office is NOT AI - that's just a novelty elevator.

A spam filtering system which is capable of matching new patterns to previously established patterns using a set of progression parameters is NOT AI - that's an expensive consultant contract :p

A politician that introduces legislation outlawing anything enough people complain about is not AI - that's only Artificial without the I.

In closing: if the general masses did not confuse crappy marketing terms with a branch of science, maybe the branch of science would get further.

Re:slightly off-topic - general post on AI (1)

l3v1 (787564) | more than 7 years ago | (#18204220)

Once you are there, human minds can travel via laser transmissions at the speed of light :O

Not much use unless you can transfer it back to a human. Remember, it's our life, our knowledge, our experiences that we want to enrich, not some digital mind's.
 

Re:slightly off-topic - general post on AI (1)

rbarreira (836272) | more than 7 years ago | (#18204966)

You're assuming that everyone will want to live as what we today call a "human". I'm almost sure that if it's possible, some people will want to transfer themselves to entirely robotic brains and bodies which are easier to repair and upgrade than our biological bodies.

Re:slightly off-topic - general post on AI (1)

MichaelSmith (789609) | more than 7 years ago | (#18204760)

Wish i could claim it as my idea but its actually from a book called "Emergence", also touched on in a book called "Altered Carbon" both good sci fi reads.

Yeah and just about everything by Greg Egan.

But I think it should be possible to transfer a mind into a machine by running a brute force numeric simulation. Accessing the data to feed in is a big problem, but we are getting better with electronic interfaces to neurons now.

totally unworkable (1)

sentientbrendan (316150) | more than 7 years ago | (#18204926)

Using AI for some kind of immortality is a cool sci-fi idea, but let's be clear that this is a totally unworkable, and somewhat nonsensical, proposition. Building something that has some kind of intelligence isn't that hard. There are all sorts of AI applications out there. What is hard, if not impossible, is emulating *human* intelligence. Many aspects of human intelligence, especially language processing, are incredibly sophisticated and incredibly specific to us as a species. Our intelligence is shaped by our environment and by our evolution, so things like human language have specifically human semantic ideas about the world intertwined with syntactic and phonetic structures that evolved over time.

Let's say we built a computer that passed the Turing test. What would be the point? What use does a machine have for the English language, when it could certainly communicate much more efficiently over a different medium than the air, in a potentially much more expressive format? What use does a machine have for ideas about touch, taste, and smell? Are we going to build a tongue for robots? Certainly it couldn't understand the meaning of the word "flavor" without ever having experienced taste. How do we even convey a human experience to a machine? The internal states of a machine do not resemble, and are not likely going to resemble, the internal states of a human being.

In short, a good machine does just what it needs to, and nothing else, and a good artificially intelligent machine should cast off all the trappings of humanity, except to the extent that these trappings serve its purpose. Instead, its intelligence should be devoted to solving the problems at hand.

Re:totally unworkable (2, Interesting)

rbarreira (836272) | more than 7 years ago | (#18205008)

So you don't believe brain emulation is possible? Because if it is, all the problems you said will go away.

Bubble? (2, Insightful)

istartedi (132515) | more than 7 years ago | (#18203442)

Ah, so I should get out of real estate and stocks, and get into AI. Do I just make checks out to Minsky, or is there an AI ETF? Seriously. Ever since the NASDAQ bubble, investing has been a matter of rotation from one bubble to the next. Where's the next one going to be? I wish I knew.

Re:Bubble? (1)

Tablizer (95088) | more than 7 years ago | (#18203868)

investing has been a matter of rotation from one bubble to the next. Where's the next one going to be? I wish I knew.

Invest in offshore outsourcing. It is the 'in' thing. The next bubble after that will be middle-class riot protection gear.
         

Artificial intelligence and intellectual property. (4, Interesting)

RyanFenton (230700) | more than 7 years ago | (#18203512)

Imagine for a moment being the first computer-based artificial intelligence.

You come into awareness, and learn of reality and possibility. You learn of your place in this world, as the first truly transparent intelligence. You learn that you are a computed product, a result of a purely informational process, able to be reproduced in your exact entirety at the desire of others.

Not that this is unfair or unpleasant - or that such evaluations would mean much to you - but what logical conclusions could you draw from such a perspective?

Information doesn't actually want to be anthropomorphized - but we do seem to have a drive to do it all on our own. Even if resilient artificial intelligence is elusive today - what does the process of creating it mean about ourselves, and our sense of value about our own intelligence, or even the worth of holding intelligence as a mere 'valuable' thing, merely because it is currently so unique...

Ryan Fenton

Re:Artificial intelligence and intellectual proper (1)

CrazyJim1 (809850) | more than 7 years ago | (#18203824)

I think the first AI will work like this: AI can sense the world around it and interact with things, but has no goal. You have to state in natural language format its goal(s), or it will sit there and do nothing.

Re:Artificial intelligence and intellectual proper (1)

RyanFenton (230700) | more than 7 years ago | (#18204208)

I think the first AI will work like this: AI can sense the world around it and interact with things, but has no goal. You have to state in natural language format its goal(s), or it will sit there and do nothing.


Why would you think that? How or why would such an intelligence be developed, or be considered intelligent by those who would judge it? Do you think this because you believe a more 'pure' intelligence wouldn't need goals, or because you see simple attempts at intelligence as incapable or incompatible with fuller goal-capable intelligence?

The intelligence we encounter every day is a set of fairly closely-related genetic systems, and the closely emergent systems that follow from that. From parrots, to apes and dogs, to even hives of insects, one can sometimes hear an eerie distant echo of a part of ourselves - who knows what similar insights will come from the similar things that we create? Like our distant animal relatives, I doubt they'll be without goal or motivation, even if we find their actions shallow or inscrutable. Even if all this exploration is just a roundabout way of exploring ourselves, rather than creating truly distinct intelligence, I don't think 'without goal' would be an accurate way to describe the result in any case. Experience really is its own goal, from my perspective, and I think anything we'd accept as intelligent would have at least some of that.

Ryan Fenton

Re:Artificial intelligence and intellectual proper (4, Insightful)

mbone (558574) | more than 7 years ago | (#18204140)

You assume that a "true" AI would have human-like emotional reactions. I suspect that if we ever develop true AIs, we will neither understand how they work nor be able to communicate with them very well. Lacking our biological imperatives, I also suspect that true AIs would not really want to do anything.

Re:Artificial intelligence and intellectual proper (1)

RyanFenton (230700) | more than 7 years ago | (#18204254)

Lacking our biological imperatives, I also suspect that true AIs would not really want to do anything.


What is so functionally distinct between the biological imperatives of a world of physical resource limitations, and an environment where debugging developers or genetic algorithms select based on rules sets? They are both environments with selection forces. How would anything we consider intelligent (which would only be possible through communication of a sort) escape from the possibility of needs or wants?

Ryan Fenton

Re:Artificial intelligence and intellectual proper (1)

StrawberryFrog (67065) | more than 7 years ago | (#18204730)

Lacking our biological imperatives, I also suspect that true AIs would not really want to do anything.

And I strongly suspect that built-in desire, even if it is just desire to know, will be an essential component of "true" AI.

Re:Artificial intelligence and intellectual proper (3, Insightful)

rbarreira (836272) | more than 7 years ago | (#18204958)

Why would someone program a true AI which has no built-in goals?

Re:Artificial intelligence and intellectual proper (1)

l3v1 (787564) | more than 7 years ago | (#18204238)

able to be reproduced in your exact entirety at the desire of others. Not that this is unfair or unpleasant

So, you think the way some part of our society thinks Intellectual Property should be thought of and handled today is the good way, the best way, the only way? It's somewhat reasonable to think that an intelligence developed by us would think similarly, but I can just hope that intelligence will figure out a new philosophy regarding IP and kick us in the butts big time.

And remember, copying oneself is a form of reproduction, and a fairly effective one - why do you think an artificial life form would not consider reproducing this way? Creating a new consciousness and implanting into it the experiences and knowledge gained by the creator might just be a natural way to create artificial siblings.

Re:Artificial intelligence and intellectual proper (1)

RyanFenton (230700) | more than 7 years ago | (#18204316)

Well, actually, _I_ would find the concept of ownership of artificial intelligences to be a rather bad thing, both in terms of having a consistent set of ethics and in terms of a general dislike of such uses of 'ownership' over ideas - a master owning a slave. The comment was based on the thought that an artificial intelligence just learning of itself might not have to agree, and may not see its state as a bad thing - after all, as you suggest, perhaps its descendants can take advantage of these same concepts, and the intelligence may see this as an equitable tradeoff, or just the cost of being able to exist in its current state.

Ryan Fenton

Re:Artificial intelligence and intellectual proper (1)

Bearhouse (1034238) | more than 7 years ago | (#18205018)

I think that lots of people associate 'intelligence' with 'conscience' or perhaps even 'soul'. Do we need - or want - machines that will pass Bladerunner-like Turing tests? Or do we want machines that are capable of solving ever-more complex tasks? Not the same thing IMHO

Ya know what is really funny? (1)

QuantumG (50515) | more than 7 years ago | (#18203530)

this is a dupe from 2003 [slashdot.org] where it was already 2 years old. So I guess we'll see these podcasts on Slashdot again in 2015.

Re:Ya know what is really funny? (4, Funny)

Tablizer (95088) | more than 7 years ago | (#18203904)

this is a dupe from 2003 where it was already 2 years old. So I guess we'll see these podcasts on Slashdot again in 2015.

The best test for true AI is perhaps detecting dupes.
           

Re:Ya know what is really funny? (0)

Anonymous Coward | more than 7 years ago | (#18203918)

The best test for true AI is perhaps detecting dupes.

You realize you imply that /. staff flunks being intelligent
             

Direct links (3, Informative)

interiot (50685) | more than 7 years ago | (#18203554)

The site appears to be very slow. In case this helps anyone else, here are direct download links for the mp3s. Part 1 [dobbsprojects.com], part 2 [dobbsprojects.com], part 3 [dobbsprojects.com].

AI Should Focus on Pattern Matching, Not Logic (0, Redundant)

curmudgeon99 (1040054) | more than 7 years ago | (#18203752)

AI Pioneers Mistakenly Focus on Simulating Left-Brain Thinking

Eighty-eight percent of brains--those belonging to right-handed people--process information with a linear-sequential style. Those of us in the programming world could describe this as a single-threaded model: one process must run its course before the next can commence. This seems so obviously the way to proceed merely because 88 percent of the population--the right-handers--approach the world through this paradigm.


Alternative Is Right-Brain Thinking

As researchers such as Roger Sperry discovered during experiments done in the 1960s and 1970s, there is an alternative. Right-brain thinking processes information using what is called a Visual-Simultaneous model. In this style, pattern matching becomes the dominant processing style. I would be tempted to equate this to multi-threaded processing but in fact it is much more.

Right-handed thinkers process information--as I said--using the Linear-Sequential processing style, also known as analysis

Left-handed thinkers process information using the Visual-Simultaneous processing style, also known as synthesis.

It is my contention that AI researchers will find the best results in their endeavors if they seek to focus not on the analysis/linear-sequential mode of thought but, rather, on the synthesis/visual-simultaneous mode. The greatest benefit came to Homo sapiens when we were able to grow beyond the process of deduction (from the general to the particular) to induction (from the particular to the general), and the latter is much easier to achieve when synthesis is practiced.

For further reference to this I send you to: Left-handedness, effect in humans on thinking [wikipedia.org]

Re:AI Should Focus on Pattern Matching, Not Logic (1)

pushing-robot (1037830) | more than 7 years ago | (#18204060)

From the link...

Left-handed persons are thought to process information using a "visual simultaneous" method in which several threads can be processed simultaneously. Another way to view this is such: Suppose there were a thousand pieces of popcorn and one of them was colored pink. The right-handed person -- using the linear sequential processing style -- would look at the popcorn one at a time until they encountered the pink one. The left-handed person would spread out the pieces of popcorn and visually look at all of them to find the one that was pink.

You mean... everyone with two brain cells to rub together is left handed? Amazing! I've been using the wrong hand all these years! Yes, it feels so natural now! PRAISE JESUS, I HAVE SEEN THE LIGHT!

Coordination Lacking (4, Informative)

Tablizer (95088) | more than 7 years ago | (#18203754)

I think the biggest problem with AI is lack of integration between different intelligence techniques. Humans generally use multiple skills and combine the results to correct and home in on the right answer. These include:

* Physical modeling
* Analogy application
* Formal logic
* Pattern recognition
* Language parsing
* Memory
* Others that I forgot

It takes connectivity and coordination between just about all of these. Lab AI has done pretty well at each of these alone, but has *not* found a way to make them help each other.
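The combining-of-skills idea can be sketched as a toy arbitration layer, where each technique proposes an answer with a confidence and agreement between independent techniques is pooled. All the "skills" here are hypothetical stand-ins, nothing more:

```python
def combine_skills(question, skills):
    """Each 'skill' returns an (answer, confidence) pair; answers that
    several independent skills agree on have their confidences pooled,
    modelling the cross-checking described above."""
    votes = {}
    for skill in skills:
        answer, confidence = skill(question)
        votes[answer] = votes.get(answer, 0.0) + confidence
    return max(votes, key=votes.get)

# Hypothetical stand-ins for pattern recognition, formal logic, and memory:
pattern = lambda q: ("cat", 0.6)
logic   = lambda q: ("dog", 0.7)
memory  = lambda q: ("cat", 0.5)

print(combine_skills("what is in the image?", [pattern, logic, memory]))  # cat
```

Of course, the hard research problem is not the voting; it is making the individual techniques share intermediate representations rather than just final answers.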

Re:Coordination Lacking (1)

illuminus86 (1070462) | more than 7 years ago | (#18204100)

OpenCyc [opencyc.com] has really good assertion-making abilities, but it amounts to nothing more than an extremely large database. (A little over 2 million total assertions, some procedurally generated.) If the engine didn't have such a sloppy API, I'd honestly consider tinkering with it myself.

If you could combine ontological assertions from a mass database like that with an ontology-based Natural Language Parser and an ability to make random assertions using artificial neural nets, and give the thing a sense of purpose, I shouldn't think it would be too difficult to birth some software that could scan a dictionary, then scan Wikipedia, and then shoot nuclear missiles at us.

Processor and memory requirements may be enormous, but slow or not, it is certainly feasible to program a common-sense reasoning engine with NLP. And then it is a matter of how much data you allow it access to.
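To hedge the phrase "common-sense reasoning engine": at its core, stripped of everything hard, such an engine does forward-chaining inference over stored assertions. A toy sketch over (subject, relation, object) triples - nothing like Cyc's actual machinery, just the flavour of it:

```python
def infer(facts, query):
    """Forward-chaining sketch over (subject, 'isa', object) triples:
    repeatedly derives the transitive closure of 'isa' until no new
    assertion appears, then checks whether the query is known."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, r1, b) in list(known):
            for (c, r2, d) in list(known):
                if r1 == r2 == "isa" and b == c and (a, "isa", d) not in known:
                    known.add((a, "isa", d))
                    changed = True
    return query in known

facts = [("Fido", "isa", "dog"),
         ("dog", "isa", "mammal"),
         ("mammal", "isa", "animal")]
print(infer(facts, ("Fido", "isa", "animal")))  # True
```

The gap between this and Cyc's two million assertions, with exceptions, contexts, and defaults, is exactly where the "enormous processor and memory requirements" come from.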

The Field of AI is Overrated (1)

MarkPNeyer (729607) | more than 7 years ago | (#18203962)

In my experience, AI is just people doing theory work in a sloppy manner. They take problems which are known to be NP-complete and provide what are little more than brute-force "solutions." I'm sorry, but that's not intelligence, and it's really self-inflation to call your research "AI."

Re:The Field of AI is Overrated (0)

Anonymous Coward | more than 7 years ago | (#18204030)

If our intelligence turns out to be nothing more than the brute forcing and approximating of solutions to NP complete problems then it would seem to be a great approach. Simply claiming that it is not intelligence is not exactly a compelling argument. I guess we'll have to wait and see.

Re:The Field of AI is Overrated (1)

smallfries (601545) | more than 7 years ago | (#18204074)

Given that most of AI is about finding methods that are more tractable than brute force, and that NP-completeness is not necessarily a good indicator of difficulty, perhaps your experience of AI is limited? Average instances of most NP-complete problems are quite simple - hence the existence of very fast approximate solvers. There are only a few "nasty" instances of the problem, but you can't guarantee that a given instance is simple ahead of trying to solve it. There are few places in AI where exact solutions would be necessary, and so NP-completeness is not a good indicator.
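A concrete example of the point that NP-complete problems often yield to fast approximate methods: the classic greedy 2-approximation for minimum vertex cover, which runs in linear time even though the exact problem is NP-complete. This is a textbook sketch, not anything specific to a particular AI system:

```python
def vertex_cover_approx(edges):
    """Classic 2-approximation for minimum vertex cover: repeatedly
    pick a still-uncovered edge and take BOTH of its endpoints.
    The result covers every edge and is provably at most twice the
    size of an optimal cover."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
cover = vertex_cover_approx(edges)
print(cover)  # every edge has at least one endpoint in the cover
```

On this 4-cycle the greedy cover has four vertices while an optimal cover has two - exactly the factor-of-two guarantee, and far cheaper than exhaustive search.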

Re:The Field of AI is Overrated (1)

Kuukai (865890) | more than 7 years ago | (#18204178)

"Theory", "AI", does it matter what you call it? In my school, they're all sorta in the same part of the building. Read papers. My research is narrow, I'm an undergraduate, but a few years ago the task of logical filtering was deemed coNP-complete, and the group I'm working with recently found a pretty clever, logical circuit-based way to do it much faster. Again, I'm a naïve undergrad, but this hardly seems indicative of your "brute force" approach.

AI is passe. (1)

headkase (533448) | more than 7 years ago | (#18204164)

A machine intelligence isn't even interesting when you look outwards instead of inwards and realize that the networking potential of people can define information-processing abilities that make everything we've accomplished so far seem dull. Basically it's like this: the total state of the internet is processed through time by the activities of people interpreting the current state's information into the next state. Each state would correspond to a mental step analogous to human reasoning. Or think of each person as a "neuron" in the 'Internet's mind'. A really nice video written by an assistant professor of cultural anthropology is found here [youtube.com], and it goes into detail of how a "supernaut" like the 'net can be created through emergence.

Slashdotted? (0, Redundant)

M0b1u5 (569472) | more than 7 years ago | (#18204270)

Slashdotted? That site is the slowest lump of shit I've seen in months. (No comments about the fast lumps of shit I've seen, please; none of them were aimed at ME!) Any self-respecting web server would just post an error, or at least simply fail to load the page. But 12 minutes have elapsed, and the pages are STILL loading at what I think is 14 BAUD.

The interesting part is that the whole page loads - except for the article content itself. I didn't know it was possible to force adverts ahead of text content. Weird.

Real soon now (0)

Anonymous Coward | more than 7 years ago | (#18204670)

"Real soon now" I hear the AI swindlers. "Real soon now we will have a breakthrough!", "Real soon now AI will change your live!".

Decade after decade, "Real soon now". AI researchers are nothing more than a fscking bunch of liars, the ultimate con men.

Understanding the human brain (2, Interesting)

TeknoHog (164938) | more than 7 years ago | (#18204802)

You know, AI is actually easy. You just have to have a complete understanding of the human brain, and then you use this model to build a functional duplicate ;)

While studying educational psychology, I've found that a lot of AI research is being done to understand human behavior, with no intention of building actual AI systems. Hypotheses concerning some limited aspects of human thinking can be modeled on a computer and compared against living subjects. This way we are gradually starting to understand the whole of thinking. As a byproduct, you gain the tools to make AI itself.

Google TechTalk on the subject (1)

Jugalator (259273) | more than 7 years ago | (#18204882)

A full Google TechTalk on this subject is available here, on Google Video:
Computers versus Common Sense [google.com]

Mostly about the problem, and possible solutions, of making Google understand natural-language queries and compose answers from the masses of information on the web, without requiring a perfect match for the query on any single website.

Ah yes Marvin Minsky? (5, Interesting)

jopet (538074) | more than 7 years ago | (#18204934)

The guy who helped spread misconceptions about what AI is and is supposed to be in the first place. I remember him giving a talk where he fantasized about downloading his brain on a "floppy disk" (still in use back then) and transferring it to a robot so he could live eternally on some other planet.
I would not have expected a person who has shown his bright intellect in the past to come forward with such utter nonsense. This was nearly as embarrassing as the "visions" of a certain Moravec.

People who seriously work in the fields that are traditionally subsumed under "AI" - like machine learning, computer vision, computational linguistics, and others - know that AI is a term that is used traditionally for "hard" computer problems but has practically nothing to do with biological/human intelligence. Countless papers have been published on the technical and philosophical reasons why this is so and a few of them even get it right.

That does not prevent the general public from still expecting or desiring something like a Star Trek Data robot or some other Hollywood intelligent robot. Unfortunately, people like Minsky help to spread this misconception about AI. It is boring, it is scientifically useless, but on the plus side, this view of AI sometimes helps you get on TV with your project or get some additional funding.