
Recent Advances in Cognitive Systems

timothy posted more than 11 years ago | from the smarter-than-me dept.

Science 85

Roland Piquepaille writes "ERCIM News is a quarterly publication from the European Research Consortium for Informatics and Mathematics. The April 2003 issue is dedicated to cognitive systems. It contains no fewer than 21 articles, all available online. In this column, you'll find a summary of the introduction and of the possible applications of these cognitive systems. There's also a picture of the cover, a little robot with a very nice looking blue wig. And in A Gallery of Cognitive Systems, you'll find a selection of stories, including links, abstracts and illustrations (the whole page weighs 217 KB). There are very good pictures of autonomous soccer robots, swarm bots, cognitive vision systems, and more."


85 comments


Quake in Georgia. (-1)

varak_mathews (592911) | more than 11 years ago | (#5832607)


Re:Quake in Georgia. (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5832704)

So, what kind of movies are people watching these days? I just got done watching Heist and thought it was pretty good, with all sorts of things to catch you by surprise. I've been thinking about catching The Transporter next.

cognitive systems = teh VARY ghey (-1, Redundant)

Anonymous Coward | more than 11 years ago | (#5832614)

fp

penis (-1, Redundant)

Anonymous Coward | more than 11 years ago | (#5832617)

first very bored working on a useless phb brainfart
project penis post.

Thx.

I don't understand (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5832620)

WTF is kognativ?

Re:I don't understand (-1)

count_sporkula (446625) | more than 11 years ago | (#5832630)

spelt wrong.

Cognitive Science (1, Flamebait)

NicotineAtNight (668197) | more than 11 years ago | (#5832643)

Noun ; 1. The current scientist scam, which has replaced the older artificial intelligence scam with its more robust resistance to criticism and even more byzantine theories.

Re:Cognitive Science (1)

black mariah (654971) | more than 11 years ago | (#5832667)

Yeah, it's just another 419 scam like the Theory of Relativity. Those damn Nigerians will say anything to screw someone out of millions of dollars.

*It's a joke. Laugh.*

Re:Cognitive Science (4, Insightful)

Max Romantschuk (132276) | more than 11 years ago | (#5832682)

Noun ; 1. The current scientist scam, which has replaced the older artificial intelligence scam with its more robust resistance to criticism and even more byzantine theories.

Actually, cognitive science does not replace AI. The goal of cognitive science is to figure out how our brain works on a functional level. Where neurology studies the actual chemical reactions and neural activity, cognitive science studies how the "hardware" works to achieve our thought processes.

One good example is how the brain works out an image from the mishmash of neural impulses going through the retinal nerves. The resolution of the eye is actually quite low, and the "pixels" aren't ordered in any linear fashion. The brain does an enormous amount of processing to form an actual image. This is why babies can't see, even though the optics work. The brain needs to develop the processing algorithms in order to make sense of all the information coming in.

Of course, all of this is theory, and subject to scientific dispute :)

Not all cognitive scientists do that. (5, Interesting)

fireboy1919 (257783) | more than 11 years ago | (#5832716)

Actually, I'd say that not very many are doing that.

The goal of all the cognitive scientists I've met is to make machines think, just as with A.I. In fact, I've always heard, and was told in my AI class, that A.I. is a branch of cognitive science.

However, there are many approaches to machine thinking that are not considered part of A.I.:
neural networks, SVMs, computer vision (signal interpretation), modeling.

So what does A.I. cover then? Well, it's not exactly well defined. If you read A.I. textbooks, you'll find them full of lots of different things. Some would go so far as to even include those things I mentioned that aren't normally considered part of A.I. However, in general, I would say that A.I. is the field that is concerned with
1) Solving the search problem (searching for a solution in a large set of possibilities)
2) Doing it with heuristics.
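That "search plus heuristics" definition can be sketched as a toy greedy best-first search. Everything here (the number-line state space, the goal, the heuristic) is invented for illustration, not taken from the thread:

```python
import heapq

def best_first_search(start, goal, neighbors, heuristic):
    """Greedy best-first search: always expand the state with the
    lowest heuristic estimate of distance to the goal."""
    frontier = [(heuristic(start), start)]   # priority queue keyed on h
    came_from = {start: None}
    while frontier:
        _, state = heapq.heappop(frontier)
        if state == goal:
            path = []                        # reconstruct the path
            while state is not None:
                path.append(state)
                state = came_from[state]
            return path[::-1]
        for nxt in neighbors(state):
            if nxt not in came_from:
                came_from[nxt] = state
                heapq.heappush(frontier, (heuristic(nxt), nxt))
    return None

# Toy state space: integers on a number line, stepping +/-1 from 0 to 7.
path = best_first_search(0, 7,
                         neighbors=lambda s: [s - 1, s + 1],
                         heuristic=lambda s: abs(7 - s))
print(path)  # [0, 1, 2, 3, 4, 5, 6, 7]
```

The heuristic steers the search straight toward the goal instead of exploring the whole (infinite) set of possibilities, which is the point of 2) above.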

I'd like to take a moment to note that a famous computer vision paper came out in the 80s documenting a method called Marr-Hildreth for finding edges in images. They created it by using the same technique that eyes use (a Laplacian of a Gaussian for edge detection - they studied cats to find this out).

A few years later someone improved upon it by throwing out the model completely and NOT doing it the way that people do (Canny).
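For concreteness, the Marr-Hildreth idea mentioned above - smooth with a Gaussian, take the Laplacian, mark zero crossings as edges - can be sketched in plain NumPy. This is a toy illustration on an invented synthetic image, not the exact machinery of the original algorithm:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def log_edges(image, sigma=2.0):
    """Marr-Hildreth-style edges: zero crossings of the Laplacian
    of a Gaussian-smoothed image."""
    k = gaussian_kernel(sigma, int(3 * sigma))
    # separable Gaussian blur: 1-D convolution along rows, then columns
    g = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                            image.astype(float))
    g = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, g)
    # 5-point discrete Laplacian (np.roll wraps at the borders; fine for a toy)
    lap = (np.roll(g, 1, 0) + np.roll(g, -1, 0) +
           np.roll(g, 1, 1) + np.roll(g, -1, 1) - 4 * g)
    # mark pixels where the Laplacian changes sign against a neighbor
    sign = lap > 0
    edges = np.zeros_like(sign)
    edges[:-1, :] |= sign[:-1, :] != sign[1:, :]
    edges[:, :-1] |= sign[:, :-1] != sign[:, 1:]
    return edges

# Synthetic image: dark left half, bright right half.
img = np.zeros((32, 32))
img[:, 16:] = 255.0
print(log_edges(img)[:, 14:18].any())  # True: zero crossings at the step
```

Canny's later detector keeps the Gaussian smoothing but replaces the zero-crossing step with gradient magnitudes, non-maximum suppression, and hysteresis thresholding - i.e., it threw out the biological model and did better.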

Cognitive scientists are usually more concerned with getting the machines to do what we want than they are with modeling human thinking techniques.

Re:Not all cognitive scientists do that. (3, Insightful)

willis (84779) | more than 11 years ago | (#5832766)

Cognitive scientists are usually more concerned with getting the machines to do what we want than they are with modeling human thinking techniques.
I think the answer is somewhere in the middle. My experience with cogsci is that it's really about understanding thought, not about making machines. I think it really depends on where you are. If you're at MIT, it's probably machines. If you're at Berkeley, it's probably thought (at least for me: I took a class on cognitive metaphor, and we had lots in that direction). I think Santa Barbara is also more brain-focused (the so-called "west-coast school").

Re:Not all cognitive scientists do that. (1)

Svobodin (668681) | more than 11 years ago | (#5834070)

Cognitive metaphor at Berkeley? With Jim Martin? I need to talk to this guy. My master's thesis is drawing heavily on his work. Guess I should get off my lazy ass and send him an email, eh? Anyway, I'd say the goals of most cognitive scientists are focused on machine models of human intelligence -- not to build a smarter machine per se, though that is often a serendipitous byproduct, but rather to further our understanding of the computational powers of the mind.

Re:Not all cognitive scientists do that. (1)

willis (84779) | more than 11 years ago | (#5838469)

A few years back, I took a class with George Lakoff... at the time, a close friend of mine was a cog sci major, and I hung about quite a bit.

His metaphor stuff is really good -- when you get it, it's like a new way of looking at all of the old language and thinking you've ever used before... (I'm sure you know what I mean).

Good luck with your work!

Re:Not all cognitive scientists do that. (1)

Svobodin (668681) | more than 11 years ago | (#5838597)

Thanks!

Yeah, Lakoff does some very interesting work with metaphor. Here's something recent [dpingles.ugr.es] .

It was his research that contributed a lot to the idea that since we (humans) process metaphor and figurative language at the same rate we do literal, non-figurative language, computers should do the same. Big implications for NLP...

Re:Not all cognitive scientists do that. (3, Informative)

StrawberryFrog (67065) | more than 11 years ago | (#5832806)

The goal of all the cognitive scientists I've met is to make machines think, just as with A.I.

I have never met any cognitive scientists, but I've read books on the subject by Daniel Dennett (who is arguably a philosopher, not a scientist) and Steven Pinker (a cognitive scientist). The works of both of them are highly recommended.

Anyway, neither of them is focused on making machines think, but rather on understanding what makes humans think.

Re:Not all cognitive scientists do that. (1)

Scarblac (122480) | more than 11 years ago | (#5832810)

The goal of all the cognitive scientists I've met is to make machines think, just as with A.I. In fact, I've always heard, and was told in my AI class, that A.I. is a branch of cognitive science.

The goal of cognitive science is to find out how humans think, especially how they process information and reach decisions.

AI as a branch of cognitive science tries to model human thought with computers, basically testing theories on how the human brain achieves the things it does.

AI in general tries to make computers intelligent (whatever that means).

Re:Not all cognitive scientists do that. (5, Insightful)

r (13067) | more than 11 years ago | (#5832814)

trying to define AI is always problematic. very much like trying to define philosophy. :)

the classic sense of AI might have been that of search and planning. but for the last 20 years or so, many non-search and non-symbolic approaches have been treated as equals in the discipline, including:
  • behavior-based robotics
  • affective computing
  • software agents
  • ...and of course particular techniques like neural networks, bayes nets, markov model approximations, etc.
castelfranchi's introductory article in that issue actually mentions the various schisms against classic AI, which have come to be successfully reconciled with and included in the discipline.

but you're absolutely right, cog sci is more concerned with mimicking human cognitive processes. which is why AI cannot simply be a branch of it. :)
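One of the techniques in the list above, neural networks, is small enough to sketch in miniature. This is a hypothetical toy (the AND task and all parameters are invented for illustration): a single perceptron learning the AND function by error-driven weight updates:

```python
def train_perceptron(samples, epochs=20, lr=1):
    """Classic perceptron rule: nudge the weights by the prediction error."""
    w, b = [0, 0], 0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = target - out              # -1, 0, or +1
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

# Truth table for AND -- linearly separable, so the perceptron converges.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print([predict(x) for x, _ in AND])  # [0, 0, 0, 1]
```

Note there is no explicit search through a symbolic space here, which is why approaches like this sat uneasily beside classic search-and-planning AI for so long.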

Re:Not all cognitive scientists do that. (1)

Fesh (112953) | more than 11 years ago | (#5834281)

Re your sig:

Lisp: Getting there is half defun.

Re: Not all cognitive scientists do that. (3, Insightful)

Black Parrot (19622) | more than 11 years ago | (#5832986)


> The goal of all the cognitive scientists I've met is to make machines think, just as with A.I.

You need to meet more then. Ask linguists whether they're studying cog sci and they'll give you an emphatic "yes". I think these days most research psychologists would say so as well (though maybe clinical psychologists wouldn't).

> In fact, I've always heard, and was told in my AI class, that A.I. is a branch of cognitive science.

Some AI is, but not all. It really depends on the individual researcher's goals.

> However, there are many approaches to machine thinking that are not considered part of A.I.:
neural networks, SVMs, computer vision (signal interpretation), modeling.


Never heard of SVMs, but most AI researchers do think neural networks, computer vision, and certain kinds of modelling are subfields of AI.

Who taught your AI class?

> Cognitive scientists are usually more concerned with getting the machines to do what we want than they are with modeling human thinking techniques.

No, you have that backwards. AI researchers are concerned with getting machines to behave intelligently, and cog sci researchers are trying to understand human or animal cognition. And there is a fair amount of overlap, e.g. an AI/CogSci researcher may try to get a machine to behave intelligently as a model of human cognition.

Re:Cognitive Science (2, Informative)

percepto (652270) | more than 11 years ago | (#5834042)

Where neurology studies the actual chemical reactions and neural activity, cognitive science studies how the "hardware" works to achieve our thought processes.

You *almost* got it. Cog Sci approaches the mind as an information processing device and seeks to understand the algorithms (mental representations and processes) operating on the incoming data. Thus, Cog Sci is the study of the mind as software not "hardware".

This is why babies can't see, even though the optics work.

Actually, newborn babies can do more sophisticated visual processing than you might think. In the first day of life, they have a preference for looking at faces over other stimuli. Plus, if you put two TV screens up with people talking on both and a speaker in the middle that's playing a soundtrack of one of the people but not the other, babies prefer to look at the TV screen that matches the sound. Thus, babies are wired to perform some fairly sophisticated cross-modal perceptual processing from the beginning.

Not to say that babies can see THAT well-- the myelination of neurons (kinda like insulation on an electrical wire) in the brain isn't finished until years after birth, which limits the conduction speed of neural signals and therefore the babies' perceptual and motor repertoire.

The perceptual system comes pre-wired for some basic things, and then self-organizes the rest based on the statistics of visual input from natural scenes. For instance, they've raised kittens in environments with nothing but vertical stripes, and after a while, they lose the ability to perceive horizontal stripes. (Sick experiments, but informative.)

Here, kitty kitty...

----------

Hey, buddy-- Can you spare a sig?

Re:Cognitive Science (1)

illaqueate (416118) | more than 11 years ago | (#5838257)

I think the hardware/software division is an entirely artificial construct. It came out of computer science because of the idea of multiple realizability. For example, an HP calculator and your computer's calculator may have the same superficial output and input but do entirely different computations or physical transformations. So according to the theory, one doesn't have to look at the lower levels; therefore non-reductionism is true. QED.

But what a physical brain "does" determines what "computation" or "information processing" happens, so the software/hardware dichotomy applied to cognitive science is silly. That something is a function at one level does not imply that it won't be a function at a lower level. It's a bad metaphor.

Re:Cognitive Science (0)

Anonymous Coward | more than 11 years ago | (#5832764)

AI is a modern day scientific fantasy like the alchemists trying to transmute lead into gold.

Ain't gonna happen.

But hey if you can get some juicy grants out of it from people who read too many Sci-Fi novels, why not?

Re:Cognitive Science (1)

Black Parrot (19622) | more than 11 years ago | (#5832935)


> Noun ; 1. The current scientist scam

You don't think cognition is a legitimate subject for scientists to study?

> which has replaced the older artificial intelligence scam

Not the same thing at all; AI will still be around, plodding along, though they may eventually get a boost from the results of cognitive scientists.

> with its more robust resistance to criticism

How so?

> and even more byzantine theories.

Sorry, but the theories have to go wherever the facts lead. General relativity and quantum mechanics aren't exactly obvious or intuitive, but that's where the investigations took us. We should expect the study of cognition to lead to some counterintuitive surprises as well.

BTW, the Byzantines maintained a high culture throughout the period the rest of Europe was practicing barbarism.

Re:Cognitive Science (0)

Anonymous Coward | more than 11 years ago | (#5841561)


BTW, the Byzantines maintained a high culture throughout the period the rest of Europe was practicing barbarism.


I think you'll find the Irish were also relatively civilised (if rather religious) around then, preserving a hell of a lot of written literature while Europe was busily going to the dogs.

Re:Cognitive Science (2, Interesting)

barryfandango (627554) | more than 11 years ago | (#5833461)

Reading this reminds me of my cognitive neuroscience/AI prof Lev Goldfarb. He began our course by telling us that very, very little has been accomplished in the fields of Cog Sci and AI, and that he is possibly the only one who has brought a real contribution to the table: a formal language ("real science") for working in this field. His "Evolving Transformation System" or ETS provides methods for measuring symbols and the differences between them, and lays the groundwork for modelling cognitive processes.

Compare this to any of the fake sciences, which can easily be identified because they have the word "science" in them: Social Science, Cognitive Science, and so on, which talk about phenomena but fail to create formalisms to describe them (as physics does for physical phenomena, for example).

He's eccentric, but is he right? I don't know. You can read a summary of his work here [cs.unb.ca] . I never dived into this field enough to learn whether he was a revolutionary or just a big talker. I'd be interested to hear what other slashdotters have to say.

Re:Cognitive Science (0)

Anonymous Coward | more than 11 years ago | (#5837816)

I think if knowledge were like "hill climbing", he'd be stuck in a local maximum -- his degree is in "Systems Design Engineering". There are plenty of people working on computational models. I don't know of any models precise enough to account for the work done by Eric Kandel on Aplysia californica showing that 'memory storage depends on the coordinated expression of specific genes for proteins that actually alter the structural elements in the brain' (and protein folding and gene expression are sufficiently complex that it won't be any time soon), but then, I am not a neuroscientist. I read a monograph titled "Gateways to Memory" that clearly isn't precise enough, but I don't think that's a function of the formalism used; there are probably many models that work, depending on how precise you need to be. ETS may be a good formalism, but that depends on whether anyone finds it useful for modelling. I read his article on "structural representation" and it is simultaneously vague and overformal, in my opinion. Most of the modelling I have encountered is quick and dirty, but that's what he thinks is wrong. It's vague because he uses a Chomsky grammar to illustrate his model -- so despite all the talk of structural representation, he's only capturing the "structure" of a classical linguistic theory.

but I'm only a computer science student..

DON'T LISTEN TO PARENT. HE FUCKS TRAFFIC CONES. (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5832657)

WTF?!? IS HE SCOTTISH? (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5832700)

damn scotts.

next step (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5832647)

next they can cognate this here cock!


Robots (1)

rf0 (159958) | more than 11 years ago | (#5832655)

It's nice to see robotics coming along and finally making good on all the sci-fi books

Rus

Re:Robots (0)

Anonymous Coward | more than 11 years ago | (#5832677)

I want one like Rommie in the Andromeda series :)

Fatwork

My vision (1)

Max Romantschuk (132276) | more than 11 years ago | (#5832659)

When I'm 85, I'm going to have a nice little holographic butler keeping track of my appointments and stuff like that... Never tiring, never forgetting.

When you're that old I think it's your right to be lazy... right? ;)

Re:My vision (0, Funny)

Anonymous Coward | more than 11 years ago | (#5832808)

When you're 85, you can imagine you have a nice little holographic butler and nobody will really pay any attention to you...

doh! (0)

Anonymous Coward | more than 11 years ago | (#5832662)

The Chiefs could have picked any of those drones instead of Larry Johnson.

Re:doh! (0)

Anonymous Coward | more than 11 years ago | (#5832708)

I moved to Kansas Shitty about 3 years ago.

Of all the shitty things about this city, the worst has to be the Chief fanaticism. Do you really think the Chief's players give a flying batfuck about KC? No, they fucking live in CA. And the crying over Derrick Thomas dying in that car accident made me want to fucking vomit. That fuckhead deserved to die - he was speeding in the rain with no seatbelt. He thought because he made millions for playing a child's game that he was invincible - he was wrong.

And if I get cut off in traffic by one more motherfucking hick in a red pickup with Chief's arrows in his back window, I'm gonna kill someone.

Oh yeah, and the fucking economy blows here too.

Re:doh! (1)

sigep_ohio (115364) | more than 11 years ago | (#5834206)

Just a little reality check, but fanaticism is prevalent in nearly every major sports town. KC isn't any different than Denver or Milwaukee or Green Bay or Jacksonville.

European Science (-1, Troll)

Epeeist (2682) | more than 11 years ago | (#5832666)

You know you shouldn't be looking at this. I mean if it comes from Europe it must be:
  1. Anti-American
  2. Socialist

Especially if it comes from France

Re:European Science (0)

Anonymous Coward | more than 11 years ago | (#5832685)

*sigh*

For the humour impaired (1)

Epeeist (2682) | more than 11 years ago | (#5833390)

I realise I missed the smiley off the end.

Here it is :-)

For reference purposes (-1, Offtopic)

Anonymous Coward | more than 11 years ago | (#5832670)

Recent cognitive research compiled here. [torrentse.cx]

Cognitive systems (-1, Troll)

Anonymous Coward | more than 11 years ago | (#5832694)

I've been using cognitive systems for YEARS.

They come in two flavors - those that are Linux-based and those that are Windows-based.

I would love to use Linux exclusively - unfortunately, all the Linux implementations of cognitive systems are bug-ridden and slow.

As far as the Windows-based implementations go - these are fast and a joy to use. Don't bother with the Linux implementations. They just aren't worth the effort, and like most things Linux, the installation is a bitch and takes HOURS.

I just wish open-source developers would get off their ass and bring Linux up to speed. I can't; I work 100 hours a week.

You can teach a computer to think (3, Insightful)

ObviousGuy (578567) | more than 11 years ago | (#5832695)

But doing so doesn't relieve you of your responsibility to think too.

Re:You can teach a computer to think (0)

Anonymous Coward | more than 11 years ago | (#5832754)

Well, in the UK you can be a criminal at 7 years old, regardless of your parents.

Re:You can teach a computer to think (1, Offtopic)

ObviousGuy (578567) | more than 11 years ago | (#5832763)

I don't think you can be anything without parents.

r00t! (0)

Anonymous Coward | more than 11 years ago | (#5836128)

nt

Re:You can teach a computer to think (0)

Anonymous Coward | more than 11 years ago | (#5832787)

So you're saying that any 7 year old is permitted to be a criminal?

Re:You can teach a computer to think (0)

Anonymous Coward | more than 11 years ago | (#5833066)

I guess I didn't get the memo. Either that or you're talking out of your ass.

Yet another Blog (0, Offtopic)

dcw3 (649211) | more than 11 years ago | (#5832698)

I blog...therefore I is.

The title reminds me of an article in AIR (5, Funny)

Jonathan (5011) | more than 11 years ago | (#5832702)

The Annals of Improbable Research [improb.com] , the humor magazine for scientists, once had an article entitled "Advances in Artificial Intelligence". After the title and author affiliations, the page was appropriately completely blank...

Re:The title reminds me of an article in AIR (5, Insightful)

Anonymous Coward | more than 11 years ago | (#5832851)

The problem is, every time an advance is made in the field of AI, that advance is immediately redefined "not AI". Voice recognition, which you can now walk into a shop and buy software for, is now "not AI". Chess playing, "not AI".

Essentially, AI is used to mean "stuff computers can't do yet".

People say "but the computer's just doing maths". Well, that's the point, isn't it? It might be that an AI powerful enough to be mistaken for human is simply horrendously complex, not unattainable, needing the sum of all those little incremental advances that AI researchers keep making.

Actually, the thing that annoys me most is that people associate Lisp with 80s AI, when in fact modern Common Lisp is an excellent multiparadigm language for all sorts of problems, and a much better fit for large software systems than, say, Java.

Re:The title reminds me of an article in AIR (1)

fferreres (525414) | more than 11 years ago | (#5836477)

It's Humanlike Intelligence now. I don't know why they don't just ditch the name and replace it with HI. Anyway, I seem to be complaining, but I'm not really; I don't think chess programs have an AI inside, not anything that could be labeled AI today. This happens because most people, including me, think only the human way of learning and acting can be labeled as "intelligent".

Certainly, we don't label certain insects which behave as machines as intelligent. In fact, everything that resembles machine-like behaviour can't be AI by definition. At the essence of our definition is that we think of ourselves as non-machines, when in reality we are some kind of elegant and powerful machine, with some special characteristics (we are not hardcoded that much)...

Re:The title reminds me of an article in AIR (3, Insightful)

alienmole (15522) | more than 11 years ago | (#5837065)

The problem is actually largely self-inflicted by AI researchers, who have at various times used AI as a gee-whiz hook to justify all sorts of research that is, at best, a peripheral application of a weak form of AI. Even scientists pay a price for overuse of hype.

"Real" AI would emphasize the "intelligence" part and be capable of, for example, learning the rules of a new game or process from a natural language description and trial and error, and then being able to perform said process. Anything less is pretty much just dicking around with heuristics.

Anyone who ever claimed that machine vision or chess playing or voice recognition was AI, was either confused or guilty of the charge in the first paragraph above. Even before those things were first achieved, the people actually working on them had a pretty good idea of how they could be achieved without anything like what we normally consider intelligence - and they went on to prove it.

Re:The title reminds me of an article in AIR (2, Informative)

KingJoshi (615691) | more than 11 years ago | (#5837523)

I think you slight the progress made and the difficulty of the task, and overgeneralize about the AI community (about the purposes and approaches of various people). You also assume a clarity about "intelligence" that the term doesn't really have.

But I'd like to bring to your attention a research project going on at my school (Michigan State University) which I think is different from other "AI". I didn't see it mentioned from glancing the article.

The attempt is to create a robot that learns and develops as a baby would. A key point is that it develops its own representation of the world. I disagree on some issues with the professor, but I think he has the right general idea.

Here's a link [msu.edu] to slides explaining the approach and another link [msu.edu] to the main research page.

Re:The title reminds me of an article in AIR (2, Interesting)

alienmole (15522) | more than 11 years ago | (#5838000)

Thanks for the links.

I don't mean to slight the progress made, and I also didn't mean to criticize all AI researchers.

Perhaps a better way to describe what I was getting at is that there's an unfortunate feedback effect that happens with these advanced applications: researchers say things which excite the general public because they describe things that sound amazing and desirable; researchers notice said excitement and connect it with increased funding; researchers then exploit the excitement by attaching loaded buzzwords like "AI" to all sorts of vaguely related research projects. But what the public heard or believed initially is never actually delivered, and what is delivered doesn't seem nearly as exciting as the original vision. If not for this effect, the first "AI crash" would never have happened.

Of course, what I've described is to some extent how the promotion of just about any project or product works. The difference with advanced applications like AI is that the ultimate end goals - which are often brought up as justifications for the work - are so far from achievability that expectations are dashed much more than usual when the projects finally reach some kind of fruition, if they ever do. Much of the audience then feels as though it was burned, and couldn't care less about the fact that "real" AI is so much harder than any other software that's been developed to date. They simply perceive that what was "promised" was not delivered.

Like public companies which learned to carefully manage their earnings so as to remain in line with Wall Street expectations, researchers in these fields need to be careful about expectations management if they're going to promote their projects publicly - unless they have something concrete they're going to be delivering in a finite and predictable timescale.

I do think a term like "Cognitive Systems" is much less likely to suffer from these kinds of problems. Many things which could reasonably be called cognitive systems research could not, without significant qualification, really be called artificial intelligence research.

Re:The title reminds me of an article in AIR (0)

Anonymous Coward | more than 11 years ago | (#5841545)

I think you're missing the history of AI - check out Russell and Norvig's "AI: A Modern Approach" some day. The end of each chapter has a "historical perspectives" (or similarly named) section, and in each one you tend to see the same pattern: people _didn't_ have a good idea of some of the stuff that was once AI and now isn't - it was only when AI researchers did it that they discovered it was "just maths".

It's easy NOW to say "anyone who ever claimed...was confused...". But what the AI researchers were/are doing was/is removing that confusion when EVERYONE was/is confused.

And personally, as far as I can see, most of the much vaunted human "intelligence" is "just dicking around with heuristics".

Intelligence and learning (1)

alienmole (15522) | more than 11 years ago | (#5846747)

People _didn't_ have a good idea of some of the stuff that was once AI and now isn't - it was only when AI researchers did it that they discovered it was "just maths".

Some people had a better idea than others, though. I don't think much fundamental has changed in terms of our general understanding of what does and doesn't constitute intelligence, since at least about the late '70s, but there've still been questionable AI-related claims in that timeframe.

And personally, as far as I can see, most of the much vaunted human "intelligence" is "just dicking around with heuristics".

Those are just the bits that aren't real AI... ;o)

Lisp: The Next Generation! (2, Insightful)

fm6 (162816) | more than 11 years ago | (#5845854)

"Actually, the thing that annoys me most is that people associate Lisp with 80s AI"

What annoys me is that people refer to Lisp as a "fifth generation language", even though it's the second-oldest high-level language (after Fortran). But that's not as annoying as calling Visual Basic a "fourth generation language" because of its database features.

All of which is a secondary result of another case of 80s hype. Declarative languages, such as SQL, were sold as "fourth generation" because they were supposed to make procedural languages ("third generation languages") obsolete. Which didn't happen of course. Declarative programming ended up supplementing older languages, not replacing them.

After a while the original meaning was forgotten. So now people call languages "4GLs" etc. to emphasize some vague claim that they're more advanced. Or because of a vague notion that 4GL has something to do with database programming. These are terms we should just stop using.
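The declarative-vs-procedural distinction the parent describes is easy to show concretely. Here's a minimal sketch using Python's standard sqlite3 module (the table and data are invented for illustration): the same filter written once declaratively, in SQL, and once procedurally, as an explicit loop.

```python
import sqlite3

# Hypothetical in-memory table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("alice", 34), ("bob", 17), ("carol", 52)])

# Declarative (the "4GL" sales pitch): state WHAT you want.
declarative = {name for (name,) in
               conn.execute("SELECT name FROM users WHERE age >= 18")}

# Procedural (the "3GL" style it was supposed to replace):
# spell out HOW to walk the rows and test each one.
procedural = set()
for name, age in conn.execute("SELECT name, age FROM users"):
    if age >= 18:
        procedural.add(name)

assert declarative == procedural == {"alice", "carol"}
```

Both styles coexist in a dozen lines of one program, which is more or less how the "replacement" actually played out: declarative queries embedded in procedural hosts.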

Real world problems and neuroscience (5, Interesting)

Neuronerd (594981) | more than 11 years ago | (#5832710)

The great thing about the recent developments in so-called cognitive systems is that they are starting to address real problems. The time of toy problems is over. It is not enough to just follow a line. Only the challenge of the real world can make algorithms in any way "clever" or meaningful.

This is why I find it truly inspiring that so much research is going into these systems these days.

Sadly, however, most of neuroscience these days is still far from these questions. Most electrophysiologists who study, for example, the visual system present it with trivial stimuli such as bars or gratings. In some sense, a system can only show its capability when the stimuli are rich enough.

Nevertheless, there is clearly a move these days towards larger, more interesting problems, even in neuroscience. We should be inspired by the work of the roboticists.

Re:Real world problems and neuroscience (5, Interesting)

PWBDecker (452734) | more than 11 years ago | (#5832791)

I don't know if you intend to be rhetorical, but irregardless I'm going to reply to clarify the absurdity of what you just said. Of course a system will truly flourish and develop when exposed to more complex stimuli, but the purpose and value of exposing such a system (such as our own visual or cognitive systems) to SIMPLE stimuli such as basic geometric shapes or simple patterns is to help develop an understanding of how it processes such stimuli, in order to develop a more complex model. I don't think much progress would be made in the field of vision recognition if we simply attached as many neurodes as we could to a person's head and monitored the feedback while they read a book.

The problem with neural systems, as anyone who knows how a basic neural net works can attest, is that they are fully distributed information processing systems, and no particular physical location is responsible for any particular processing task. In fact, research is being conducted to allow a large number of neural nets to develop their own internal relay systems, entirely beyond human control, to see whether an entire cognitive framework can develop on its own (much as it does in nature). For those interested, check out http://www.genobyte.com/; their CAM-Brain machine is, in my opinion, one of the best-developed attempts at a natural intelligence system.
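The "fully distributed" point can be illustrated with a toy feedforward net in pure Python (a generic sketch, not CAM-Brain's architecture): the output depends on every weight at once, so perturbing any single connection only nudges the result rather than erasing one localized "fact".

```python
import math
import random

random.seed(42)  # reproducible toy weights

def forward(x, w_hidden, w_out):
    """One hidden layer of tanh units feeding a single linear output."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)))
              for row in w_hidden]
    return sum(wo * h for wo, h in zip(w_out, hidden))

x = [1.0, -0.5, 0.25]
w_hidden = [[random.uniform(-1, 1) for _ in x] for _ in range(4)]
w_out = [random.uniform(-1, 1) for _ in range(4)]

y = forward(x, w_hidden, w_out)

# Nudge a single connection: the output shifts, but only slightly, because
# the computation is smeared across all weights rather than stored in one spot.
w_hidden[0][0] += 0.01
assert forward(x, w_hidden, w_out) != y
```

That is precisely what makes such systems hard to reverse-engineer by probing individual locations, which is the commenter's point about physical localization.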

Decker
Time to lose myself again.

Re:Real world problems and neuroscience (0)

Anonymous Coward | more than 11 years ago | (#5835783)

irregardless

Dude, that's like a double-negative or something.

Re:Real world problems and neuroscience (1)

the_hose (120374) | more than 11 years ago | (#5856910)

Look it up, dude...

There is a prissy, "language must remain static" camp that refuses to acknowledge the validity of "irregardless". However, it is commonly understood to connote a more emphatic version of "regardless".

Re:Real world problems and neuroscience (1)

Paradise Pete (33184) | more than 11 years ago | (#5866709)

However, it is commonly understood to connote a more emphatic version of "regardless".

I believe it's actually commonly understood to connote a writer unable to parse the very words he uses. Of course language evolves. But this is not a valid mutation unless the meaning of the prefix "ir" also changes.

Inspired (3, Funny)

maharg (182366) | more than 11 years ago | (#5832800)

The ultimate goal of the RoboCup project is, by 2050, to develop a team of fully autonomous humanoid robots that can win against the human world champion team in soccer

Now THAT's a goal.

Maybe we'll see humanoid robot referees in sports. That should stop any dissent from the players ,-}

Player: C'mon ref, that was never in a million years a f**king penalty !!
Ref: You have 3 seconds to comply..

Re:Inspired (1)

sigep_ohio (115364) | more than 11 years ago | (#5834911)

Definitely wouldn't see many assaults on Robo-Ref!

Re:Inspired (1)

Tablizer (95088) | more than 11 years ago | (#5853570)

Definitely wouldn't see many assaults on Robo-Ref!

Actually, they should make the robo-ref semi-fragile. No better way to boost ratings than to let John McEnroe or Shaq beat the chips out of a ref. You can't do that with human refs. They could also beat the stuffing out of robo-mascots.

Re:Real world problems and neuroscience (4, Interesting)

TomorrowPlusX (571956) | more than 11 years ago | (#5834202)

Sadly however most of neuroscience these days is still far from these questions.

This is interesting to me, for several reasons. I'm working on robotics in my free time, mainly not cognitive stuff but lower level autonomous muscular control and feedback loop stuff. But anyway, my girlfriend's studying neuroscience and she, like many (too many) of her peers, finds absolutely NOTHING interesting in cognitive research.

All they care about is the mechanics (which is important) but I think they consider cognition to be a peculiar but unimportant side effect of the rest of the complex process.

So, as a fellow who's spent years writing code to try to do intelligent stuff, and more recently robots to carry these actions out, it's somewhat frustrating to be in a bar with a bunch of neuroscientists and hear them dismiss cognition as irrelevant.

Maybe slashdot could use a cognitive system... (4, Funny)

1337_h4x0r (643377) | more than 11 years ago | (#5832747)

to detect dupes!

Re:Maybe slashdot could use a cognitive system... (1)

dbglt (668805) | more than 11 years ago | (#5832864)

Why would slashdot want to detect dupes? If I'm not mistaken - most authors are proud to publish dupes in the true slashdot style :)

I know... maybe once the cognitive system is all figured out (if it ever is :) - we can use it to post dupes! With different names related to the topic!

We love you slashdot

Re:Maybe slashdot could use a cognitive system... (2, Insightful)

kwench (539630) | more than 11 years ago | (#5833624)

Both things - detecting dupes and creating dupes - should be simple today. For the first, you could easily take a spam filter [paulgraham.com], modify it, and run it over all new posts. Whenever something bears high similarity to a former /. article, the system would print out a dupe warning. Of course, sometimes there is legitimate similarity between different posts (like Mozilla 1.0 is out, Mozilla 1.1 is out, Mozilla 1.2 is out, Mozilla 1.3 is out... and, guess what? Mozilla 1.4 alpha is out!)
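The parent's suggestion can be sketched in a few lines, with cosine similarity over word counts standing in for the Bayesian spam filter (the threshold and tokenizer here are arbitrary choices for illustration, not anything Slashdot actually runs):

```python
import re
from collections import Counter
from math import sqrt

def tokens(text):
    """Crude tokenizer: lowercase words, digits, and apostrophes."""
    return Counter(re.findall(r"[a-z0-9']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def is_dupe(new_post, archive, threshold=0.8):
    """Flag a submission as a likely dupe of any archived story."""
    t = tokens(new_post)
    return any(cosine(t, tokens(old)) >= threshold for old in archive)

archive = ["Mozilla 1.0 is out, get it while it's hot"]
assert is_dupe("Mozilla 1.0 is out, get it while it's hot!", archive)
assert not is_dupe("Recent advances in cognitive systems", archive)

# ...but, exactly as the parent warns, the legitimate "Mozilla 1.1"
# follow-up trips the detector too:
assert is_dupe("Mozilla 1.1 is out, get it while it's hot!", archive)
```

So lexical overlap alone gives you both the dupe warnings and the false positives the parent describes; a real filter would need something beyond word counts.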

Slashdot *does* use a cognitive system.... (2, Funny)

alienmole (15522) | more than 11 years ago | (#5837152)

It's called CmdrTaco. It's quite a sophisticated model in some respects, but it contains all sorts of spelling bugs and a strange fixation on anime and pr0n, sometimes at the same time (although only when MrsKathleenTaco isn't watching). Perhaps due in part to these irrelevant fixations, CmdrTaco's dupe-detection capability is notoriously flaky. This new cognitive research may finally allow CmdrTaco's pattern recognition systems to be upgraded, though.

But don't get your hopes up - when they attempted to upgrade JonKatz with an expanded repertoire of once-wired-now-tired cliches, the result was disastrous, and the unit had to be retired. Some upgrades are simply beyond our current technology...

On Combining Sensory and Symbolic Information (5, Insightful)

slinted (374) | more than 11 years ago | (#5832755)

Having a system combine both symbolic logic systems and sensory systems is mentioned in the article as a major focus of research today, but I wonder why this has been split so specifically...maybe someone can help me to understand.

The point at which an understanding of body position is integrated with an overall structure of goal-directed behavior seems a mirage, since this isn't necessarily the way animal systems work. The best recreation of nature's flexibility in "simple" systems that I've heard of comes from Mark Tilden's analog systems [cnn.com], which are controlled by tight feedback loops that very closely model reflex circuits, but are capable of recovering from intense deformations of "perfect positioning".

Now, obviously, reflex systems can only go so far; when you have a bot that you want to decide on a path across a room, there has to be a symbolic understanding of its environment. But it seems to me, from my (albeit very limited) understanding of insect and lower-animal intelligence, that most insects don't actually work up a full symbolic understanding of their surroundings; they just have some sense of direction towards a goal (think moths to light) and then start the reflex circuits firing to move towards it. I can understand having an end goal of a full cognitive system comparable to human understanding of the world, but it seems like people might be overshooting the process a bit. We need a greater understanding of the simple systems before we can hope to leapfrog to the big stuff.
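The moth-to-light behaviour described above can be caricatured in a few lines of Python: a pure feedback loop in the spirit of Tilden's reflex circuits (though nothing like his analog hardware), with no symbolic map of the environment anywhere. The sensor/motor abstraction is invented for illustration.

```python
def reflex_step(pos, light, gain=0.5):
    """One tick of a reflex loop: a proportional controller that moves
    the agent a fixed fraction of the way toward the stimulus.
    No map of the room, no plan -- just sensed error times a gain."""
    x, y = pos
    lx, ly = light
    return (x + gain * (lx - x), y + gain * (ly - y))

pos, light = (0.0, 0.0), (10.0, 4.0)
for _ in range(20):
    pos = reflex_step(pos, light)

# Pure feedback still reaches the goal -- no symbolic model required.
assert abs(pos[0] - 10.0) < 1e-3 and abs(pos[1] - 4.0) < 1e-3
```

Of course this only works because the stimulus is directly sensible; the moment an obstacle blocks the gradient, you are back to needing something map-like, which is exactly the parent's dividing line.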

To dispute my own point, though, I feel it's fair to say that the "simple" systems of the animal brain are already being modeled [newscientist.com] to the point that prostheses for the brain might just be within reach. The success of an artificial hippocampus would prove that modeling the brain isn't necessarily understanding the brain, but it might be easier to learn the systems from our artificial models than from the real ones.

Re:On Combining Sensory and Symbolic Information (0)

Anonymous Coward | more than 11 years ago | (#5835351)

Spot on, my thoughts exactly. Why this preoccupation with logical reasoning? It makes so much more sense to delve into basic brain function first, before ever hoping to understand how the brain reasons. After all, as any student of computer science will agree, constructing formal logical proofs requires *attention* and *effort*, yet we as humans obviously don't make an effort deducing the rules of traffic when driving through downtown Springfield!

Re:On Combining Sensory and Symbolic Information (1)

Tablizer (95088) | more than 11 years ago | (#5853605)

Now, obviously, reflex systems can only go so far

But I can almost swear that is how some managers function.

can't resist (3, Funny)

StuartFreeman (624419) | more than 11 years ago | (#5833003)

Johnny five is alive!

Somewhat Relevant Plug... (4, Informative)

Yoda2 (522522) | more than 11 years ago | (#5833134)

Experience-Based Language Acquisition (EBLA) is an open source software system written in Java that enables a computer to learn simple language from scratch based on visual perception. It is the first "grounded" language system capable of learning both nouns and verbs. Moreover, once EBLA has established a vocabulary, it can perform basic scene analysis to generate descriptions of novel videos.

A more detailed summary is available here [osforge.com] and this [greatmindsworking.com] is the project web site.

Compared to proprietary systems such as Ai's HAL [bbc.co.uk], Meaningful Machines' Knowledge Engine [prnewswire.com], and Lobal Technologies' LAD [silicon.com], EBLA is the only system to incorporate grounded/perceptual understanding of language.
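The grounding idea - word meanings resolved from repeated word/percept co-occurrence across scenes - can be sketched as a toy cross-situational learner. To be clear, this is an illustration of the general principle only, not EBLA's actual algorithm, which matches words to attributes extracted from video.

```python
from collections import defaultdict

def learn(scenes):
    """Count how often each word co-occurs with each perceived entity."""
    counts = defaultdict(lambda: defaultdict(int))
    for utterance, percepts in scenes:
        for word in utterance.split():
            for p in percepts:
                counts[word][p] += 1
    return counts

def meaning(counts, word):
    """Best guess: the percept seen most often when the word is heard."""
    return max(counts[word], key=counts[word].get)

# Invented scenes: an utterance paired with what was "seen" at the time.
scenes = [
    ("ball rolls", {"ball", "table"}),
    ("ball bounces", {"ball", "floor"}),
    ("cup falls", {"cup", "table"}),
]
counts = learn(scenes)

# Across scenes, "ball" co-occurs with the ball more than anything else.
assert meaning(counts, "ball") == "ball"
```

The interesting part, and the hard part that EBLA tackles, is doing the same trick for verbs, where the "percept" is a relation between entities over time rather than an object.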

What I'd like to hear more about (2, Interesting)

truthsearch (249536) | more than 11 years ago | (#5833152)

While this is all very interesting and becoming more practical for everyday use, we don't hear enough about the work that's related but not quite bleeding-edge. We know there are people trying to build intelligent systems for things like language understanding and intelligent web searching, but we hear little about them. I wonder if that's because most of that work happens inside corporations, while much of this bleeding-edge research is done at universities.

These are all pathetic (0)

Anonymous Coward | more than 11 years ago | (#5833612)

I think this is a little euro-centric, concentrating on systems that were developed at North American schools long ago...

Cogsci? (1)

spaceman18 (640318) | more than 11 years ago | (#5833720)

""My experience with cogsci is that it's really about understanding thought, not about making machines. I think it really depends on where you are. If you're at MIT, it's probably machines."" Ahem, not at all. Using MIT as the example, its Department of Brain and Cognitive Sciences is more concerned with human cognition. Granted, there are some people who are more in the AI area, but many are not (I was one of them). Most schools with an official cognitive science department or program mostly have cognitive psychologists or cognitive neuroscientists in it (see UCSD, UC-Berkeley, MIT, U Rochester). The bottom line is that in my five or so years as a cognitive scientist, I can tell you there is a VERY wide range of researchers who claim to be cognitive scientists. They all work on the same problem - intelligence/thought - but from different angles, be it studying human thought or trying to create something like it. Cognitive scientists come from areas such as neuroscience, AI, linguistics, psychology, etc.

technically speaking... (0)

Anonymous Coward | more than 11 years ago | (#5834123)

the whole page weighs 217 KB

...it doesn't weigh shit. Neither does it really occupy any space, in the truest sense of the word. It's virtual.

#;^)

Not a great read (4, Informative)

Illserve (56215) | more than 11 years ago | (#5834191)

I was disappointed by the five articles I read, and stopped reading. The issue basically reads like a catalog of projects and techno-terms, with very little actual content.

Basically each one boiled down to: our lab does the XYZAB project and we're studying this system.

*sigh* (0)

Anonymous Coward | more than 11 years ago | (#5834491)

It's sad to see that one of the more thought-provoking "lasting impact" stories on the site has such low traffic...

bad science (1)

andy666 (666062) | more than 11 years ago | (#5835099)

i don't see what this has to do with mathematics. the algorithms are all heuristic. my experience is that such things tend not to work outside a narrowly defined environment. i think it is a shame that good engineering work in robotics gets ignored because it doesn't look as sexy to the public as this stuff. like rigid body contact - it's really hard to have a robot pick something up. now there is actual mathematics.


AI is a vague misleading term (1)

ahkbarr (259594) | more than 11 years ago | (#5856226)

AI has yet to even define itself, and hence is troubled by the most fundamental disagreements about exactly what its questions are, leaving doubt about whether the answers could even be found given the state of the art.

All I ever hear about when folks brag about "advances in AI" are things like some new algorithm that can interpret a form of input it previously couldn't, or new theories of machine learning, etc.

No one has yet effectively defined the mechanics that make up "the mind". Folks still argue about whether it's even important.

All I want to see is a field where glorified robotics engineers do not get labeled as "AI research scientists". Classify, and move on. I want to see advances in cognitive neuroscience! Not vague references to AI that make me click the link only to be disappointed.