How Machine Learning Will Change Augmented Reality

CmdrTaco posted more than 3 years ago | from the i-can't-even-people-learn dept.

Hardware

An anonymous reader writes "Augmented reality is already adding digital information to the world around us — but the next step to making it truly useful will be when it starts to use elements of machine learning to understand the real world, Mike Lynch, boss of machine learning software specialist Autonomy, told silicon.com — also explaining machine learning's links with the theorems devised by the 18th-century cleric Thomas Bayes."
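For readers who skip the article: Bayes' theorem is just the rule for updating a belief when new evidence arrives, which is the core of the probabilistic approach Lynch describes. A minimal sketch in Python; the scenario and every probability in it are invented for illustration:

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
# Toy scenario: an AR system guessing whether a shape in the camera
# frame is a known landmark, given that an edge detector fired.
# All numbers below are made up for illustration.

p_landmark = 0.01            # prior: landmarks are rare in a frame
p_edge_given_landmark = 0.9  # likelihood: landmarks usually trip the detector
p_edge_given_other = 0.05    # false-positive rate on everything else

# total probability of seeing the evidence at all
p_edge = (p_edge_given_landmark * p_landmark
          + p_edge_given_other * (1 - p_landmark))

# posterior belief after the evidence
p_landmark_given_edge = p_edge_given_landmark * p_landmark / p_edge
print(f"{p_landmark_given_edge:.3f}")  # ~0.154: still uncertain, but 15x the prior
```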


Evolution is a great trainer (4, Interesting)

Toe, The (545098) | more than 3 years ago | (#35143150)

(Disclaimer: I have nothing to do with this site, and it is non-commercial as far as I can tell.)

This leads me to pimp my favorite new site/game/lesson... what is this? It's cool, that's all. Check out this neat implementation of a genetic algorithm that produces a cool demonstration of computer-generated evolution: http://www.boxcar2d.com/ [boxcar2d.com]
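For the curious, the loop behind a demo like that is surprisingly small. A bare-bones genetic algorithm sketch in the same spirit, evolving numbers toward a target instead of cars; the fitness function and all parameters are arbitrary stand-ins:

```python
import random

TARGET = 42.0

def fitness(x):
    # higher is better; boxcar2d uses distance travelled by the car instead
    return -abs(x - TARGET)

def evolve(pop_size=20, generations=50, mutation=1.0):
    population = [random.uniform(-100, 100) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]              # selection
        children = [random.choice(survivors)
                    + random.gauss(0, mutation)              # mutation
                    for _ in range(pop_size - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

print(evolve())  # converges near 42 within a few dozen generations
```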

Re:Evolution is a great trainer (2)

guybrush3pwood (1579937) | more than 3 years ago | (#35143344)

What the fuck is that shit? How many millions of years should I wait until I get a working bicycle? I'll run out of battery!

Re:Evolution is a great trainer (2)

Superken7 (893292) | more than 3 years ago | (#35143448)

Wow, I already knew about that 2D car experiment but just re-discovered it with a far more interactive design, thanks!

Someone should make an app for that (not kidding) that lets you be the designer of a car and makes your car compete against other users' designs, producing an online top ranking, friends ranking (you have been ousted!), etc... Maybe there is even something more sophisticated than building 2D cars that would make a great game...
(Yes, I know the website lets you design a car, that's where I got my idea from, but it won't compete against others & no rankings)

This is one of those projects where the "Wish I had more free time"-thought comes to mind :-(
Anyone? =D

Re:Evolution is a great trainer (1)

Toe, The (545098) | more than 3 years ago | (#35143952)

Try turning max wheels down to zero. In the comments, people are reporting that they end up with spoke-like contraptions that evolve to roll without wheels!

Re:Evolution is a great trainer (0)

Anonymous Coward | more than 3 years ago | (#35144362)

2 flash objects on the page, hit play on both (use flashblock) and nothing ever comes up. AWESOME....

Re:Evolution is a great trainer (1)

Mandelbrot-5 (471417) | more than 3 years ago | (#35144686)

This reminds me of an old simulation like this that was done in 3D. At the time it had to be run on a uni mainframe, but that was years ago... I would love to find it and see what could be done with my desktop rig.

Re:Evolution is a great trainer (0)

Anonymous Coward | more than 3 years ago | (#35144736)

You mean something like this? http://www.stellaralchemy.com/lee/index.php [stellaralchemy.com]

Re:Evolution is a great trainer (0)

Anonymous Coward | more than 3 years ago | (#35148064)

Way too small a population, and each generation takes forever. Poor use of a GA.

Heads up (0)

Anonymous Coward | more than 3 years ago | (#35143154)

Link goes to page 2. Navigate to page 1 before reading the article or you'll be confused like I was.

Re:Heads up (2)

PPH (736903) | more than 3 years ago | (#35144672)

A computer wouldn't have made that mistake.

First thing to learn (-1)

Anonymous Coward | more than 3 years ago | (#35143170)

How to use the FUCKING APOSTROPHE since it's obviously beyond human comprehension. We'll know we're in trouble when machines know how to use the ', they'll have outsmarted 95% of humanity.

Re:First thing to learn (0)

Anonymous Coward | more than 3 years ago | (#35143246)

your crazy. theirs no problem s'long as you understand my meaning

Re:First thing to learn (0)

Anonymous Coward | more than 3 years ago | (#35143372)

U R S0 DUMR!!!

Re:First thing to learn (1)

tehcyder (746570) | more than 3 years ago | (#35148262)

leave your schlong out of this, you pervert

They might get too smart (0)

Drakkenmensch (1255800) | more than 3 years ago | (#35143182)

One day some fool will ask a machine to figure out how to rescue the environment and fix it for us, and the machines will figure out that humans are the ones who destroy it... let's hope that Newton's Laws come standard by that point, otherwise we'll be in deep deep trouble.

Newton's Laws? (4, Insightful)

VirginMary (123020) | more than 3 years ago | (#35143270)

I think you meant Asimov's Laws of robotics! I doubt classical physics has anything to do with it.

Re:Newton's Laws? (3, Funny)

Toe, The (545098) | more than 3 years ago | (#35143428)

No, no. Learn your history. It's Asimov's version of Newton's Laws.

The full form goes something like:

1. Every robot must remain in a state of constantly not injuring humans or causing them to become injured through the robot's state of rest.

2. Any robot, subject to a force in the form of an order by a human undergoes an acceleration in the form of obeying the order as long as it does not contradict the first law.

3. The mutual forces of action and reaction between a robot and another object must not allow that other object to end the existence of the robot, provided this doesn't conflict with the first two laws.

I think that's it. Wikipedia probably has the full version.

Re:Newton's Laws? (1)

VirginMary (123020) | more than 3 years ago | (#35144066)

Never seen this before. Also couldn't find it on Wikipedia. Reference, please!

Re:Newton's Laws? (0)

Anonymous Coward | more than 3 years ago | (#35147852)

http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

Re:They might get too smart (1)

just_another_sean (919159) | more than 3 years ago | (#35143280)

Asimov's Laws, surely? Or is there something about motion, gravity and robotics that has passed me by all these years?

Re:They might get too smart (1)

Anonymous Coward | more than 3 years ago | (#35144378)

Conservation of Isaac.

logical end (2)

SethThresher (1958152) | more than 3 years ago | (#35143214)

Look, all I want is my AR glasses to overlay the world into an MMORPG/FPS sim and I'll be good, okay? Call it a reparation for the future not providing me with my own jetpack and/or flying car yet.

NEWS FLASH! (2)

jfengel (409917) | more than 3 years ago | (#35143250)

Nonexistent product will change your life! Film at 11.

(Eleven years from now, that is. We think. Maybe the schedule will slip a bit.)

Call me... (1)

guybrush3pwood (1579937) | more than 3 years ago | (#35143304)

... when it's done, you bastards! I wasted 5 seconds of porn reading the summary.

Still the future? (-1)

Chemisor (97276) | more than 3 years ago | (#35143322)

Artificial intelligence has been the technology of the future for over 40 years. The current state of it is pathetic, and there is no significant research going on. So it will continue to be the technology of the future for a very very long time.

Re:Still the future? (2, Insightful)

Anonymous Coward | more than 3 years ago | (#35143390)

Can you back up the claim that there is no significant research going on? Maybe the real problem here is your definition of "significant", which BTW you also forgot to specify.

Re:Still the future? (3, Informative)

Anonymous Coward | more than 3 years ago | (#35143398)

"current state of it is pathetic"
AI beats humans at chess and Jeopardy. It solves difficult puzzles much quicker than you or I could. Maybe it's "pathetic" compared to what you would like it to be, but it's far from pathetic.

"there is no significant research going on"
Please do the tiniest bit of research before posting garbage like this. Again though, you include another relative term like "significant" to cover yourself.

Lame post.

Re:Still the future? (3, Informative)

Skarecrow77 (1714214) | more than 3 years ago | (#35143404)

That's because John Connor has come back from the future and saved us at the last possible moment from significant breakthroughs in AI research no less than 17 times now, you ingrate!

Re:Still the future? (2)

pclminion (145572) | more than 3 years ago | (#35143916)

AI is always in the future, almost by definition. As soon as any new "artificial intelligence" algorithm goes mainstream and starts to be incorporated into real products, people tend to stop thinking of it as AI and start thinking of it as "just what those computer thingies do." So it's you who keeps changing the definition of what AI is; it's not a lack of forward progress.

AI tends to be "whatever the most advanced stuff is that people are working on." I hate the terminology, as do you apparently, but a problem with terminology doesn't mean we're not inventing anything.

Re:Still the future? (1)

narcc (412956) | more than 3 years ago | (#35147046)

So it's you who keeps changing the definition of what AI is, it's not a lack of forward progress.

It depends on your perspective. Lots of things that once would not have been considered AI now fall under that umbrella. Expert systems come immediately to mind. From that point of view, it's the field of AI that keeps changing its definition.

What the GP likely considers AI is the original goal of AI -- to create intelligent machines. To that end, the GP is correct: we're not any closer to solving that problem than we were in the 1950s.

On his claim that there is no "significant research" going on, he's WAY off base.

Re:Still the future? (2)

Palpatine_li (1547707) | more than 3 years ago | (#35143950)

I wouldn't call Watson pathetic. Also, how far have we already pushed the boundary of 'intelligence'? I mean, playing chess or basically anything easier than go is no longer considered intelligence, limited parsing of sentences is no longer considered intelligence. How long before we find out that nothing is left to be called 'intelligence'?

Re:Still the future? (1)

medv4380 (1604309) | more than 3 years ago | (#35144144)

I mean, playing chess or basically anything easier than go is no longer considered intelligence, limited parsing of sentences is no longer considered intelligence. How long before we find out that nothing is left to be called 'intelligence'?

You're missing the point. Currently all the AI out there is top-down AI; it can hardly be considered Intelligent and over-emphasizes the Artificial. Show me an AI that learns to play Chess all on its own starting from nothing but the board and pieces, then I'll consider it intelligent. What you are pointing out as AI are really nothing more than massive lookup tables. Heck, even most GAs are nothing more than an elaborate way to create a lookup table that will work for a given game or condition. Deep Blue was nothing more than an expensive database, since the programmers had to tweak it in between games. It failed the basic point of Intelligence, which is to learn on its own. If my parents had had to open up my brain every time so that I could "Improve", I'm sure they would have lobotomized me by now.

Re:Still the future? (2)

mswhippingboy (754599) | more than 3 years ago | (#35144828)

You are under the commonly held impression that intelligence requires some special magical ingredient. It does not. The human brain works within the laws of physics. It is simply a machine, albeit built out of organic material. While we don't fully understand all the details, we have made great leaps in the understanding of the mechanisms that drive it over the years. We haven't managed to create a machine with a generalized "intelligence" on par with a complete human brain yet, but we have been nibbling at the edges with vision, voice recognition, logic inference and other areas.

I suspect we won't just wake up one morning and hear on the news that someone has created an intelligent machine. It will creep into all the devices we interact with on a daily basis (maybe even embedded in our bodies), and at some point the question of whether a device is intelligent or not will be moot. If you can have an intelligent conversation with your device (maybe even more intelligent than with one of your friends), will it even matter whether it's a person or a device you are conversing with? If it can understand you and reply with a reasoned response, do you care how much "brute force" is employed under the covers?

AI is far more advanced than you realize and advancing at an accelerating pace, partly due to hardware, miniaturization, software and communications.

Re:Still the future? (1)

narcc (412956) | more than 3 years ago | (#35147080)

You are under the commonly held impression that intelligence requires some special magical ingredient.

I don't see how you can come to this conclusion given what the GP has written.

AI is far more advanced than you realize and advancing at an accelerating pace

For this, a citation is necessary.

Re:Still the future? (1)

mswhippingboy (754599) | more than 3 years ago | (#35147664)

You are under the commonly held impression that intelligence requires some special magical ingredient.

I don't see how you can come to this conclusion given what the GP has written.

The statement that somehow because the technology uses "lookup tables" or some other mundane algorithms as part of its logic disqualifies it from being intelligent implies that it has to perform its task in some "special" way. For example, any reasonable person would consider the AI in modern speech recognition systems to be far more sophisticated and advanced than it was 40 years ago. The GGP indicates that he/she would not consider this intelligence because it's "Top Down AI", by which I assume he/she means that it is implemented by performing some combination of lookups and rule processing. While speech recognition is actually a lot more sophisticated than the GGP is giving credit for (combining frequency, formant, semantic and context analysis with neural networks, support vector machines and/or Bayesian networks), even if it did its job using a massive set of "if" statements it is still considered intelligent if its behavior is intelligent. My smartphone is far superior to my dog at speech recognition. I would agree that in general, a dog is more intelligent than a smartphone, but it is a matter of degree and of context.

Also, the ability to learn is not a prerequisite to intelligence. It's definitely a "nice to have" feature so that the existing intelligence can be enhanced, but it's not a requirement. Even an amoeba has a certain amount of intelligence imparted to it through genetics. It has the ability to sense where more nutrition is in its environment and then use its rudimentary locomotion to move to the area of higher nutritional concentration. It has no ability to learn, but it does exhibit this very basic intelligent behavior.
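The amoeba example is easy to make concrete: behavior that looks purposeful can come from a fixed stimulus-response rule with no learning at all. A toy sketch; the one-dimensional "nutrient field" and the sense range are invented:

```python
# A hard-wired gradient follower: it senses both sides, moves toward the
# richer one, and never updates its rule. No learning, yet arguably
# "intelligent behavior" in the minimal sense discussed above.

def nutrients(x):
    # invented environment: concentration peaks at x = 10
    return -(x - 10) ** 2

def step(x, sense_range=0.5):
    left, right = nutrients(x - sense_range), nutrients(x + sense_range)
    return x - sense_range if left > right else x + sense_range

pos = 0.0
for _ in range(25):
    pos = step(pos)
print(pos)  # ends up oscillating around the peak at x = 10
```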

AI is far more advanced than you realize and advancing at an accelerating pace

For this, a citation is necessary.

You need a citation for this? Have you navigated through any voice response phone systems recently? They're still far from perfect, but compared to what they were just a couple of years ago they are pretty amazing. I've followed speech recognition for over 30 years, and it took about 20 of those years just to get speaker-independent systems over about 80% accuracy. Five years ago these systems did a pretty good job if you spoke clearly, and were up into the 90-95% accuracy range. Now I can just blurt a few words for a Google search on my smartphone in a noisy car and damn if it doesn't get it right almost all the time.

It took up until recently to get face recognition systems to be able to recognize a particular face while sitting right in front of the camera. Now these systems can pick a face out in a crowd. Smartphones will soon be coming out with the ability to recognize the user's face and configure preferences accordingly.

Medical diagnostic systems powered by AI technologies have long been able to exceed real doctors in the accuracy of their diagnoses (on an average basis), but they were not widely used because they could only achieve somewhere in the 90% range, and when dealing with lives that's just not good enough. However, recent developments have produced systems that are so far superior to real doctors that it's getting more and more difficult for doctors to blow them off, and many are turning to these systems if for no other reason than to get a second opinion.

The military is using sophisticated UAVs that navigate on their own and perform all sorts of complicated maneuvers without any human intervention.

Companies (e.g. Google, Apple, Amazon, etc.) use sophisticated AI to determine what you might like to see or hear or read ahead of your asking for it.

I could go on but you get the point. AI is in use everywhere.

Every time the topic of AI comes up on /., there are always the same refrains about what a failure AI was and how it hasn't advanced in 40/50 years. I say that's a load of crap. The problem is (as others have pointed out) that whenever an AI technology goes mainstream, people no longer consider it AI, but that is not a problem with the technology; rather, it's a problem with the understanding of AI by those making the statements.

Re:Still the future? (1)

narcc (412956) | more than 3 years ago | (#35147762)

The statement that somehow because the technology uses "lookup tables" or some other mundane algorithms as part of its logic disqualifies it from being intelligent implies that it has to perform its task in some "special" way.

Doesn't this (including the unquoted remainder of the paragraph and the paragraph that follows) broaden the definition of AI to include nearly all information processing? I'm not sure where you're drawing the line between what is AI and what is not.

It is entirely possible that the GP simply holds a narrower view, not that he believed intelligence to be magical.

it is still considered intelligent if its behavior is intelligent.

This is a philosophical statement to which I'm not certain I can agree without a better definition of intelligent behavior. If the behavior of the amoeba in your previous example qualifies I can't agree as it makes the definition of what can be considered intelligent virtually meaningless. (If sensing and responding to an environment qualifies, then my thermostat must be considered intelligent.)

I won't ask you to define intelligence -- it's a very slippery word -- just what qualifies, in your opinion, as intelligent behavior.

I could go on but you get the point. AI is in use everywhere

That wasn't your original claim. The claim was that AI was advancing at an accelerated pace. (I don't know how you'd measure the rate) I don't disagree that what is now being termed AI is in use in many places.

Re:Still the future? (1)

mswhippingboy (754599) | more than 3 years ago | (#35148054)

Doesn't this (including the unquoted remainder of the paragraph and the paragraph that follows) broaden the definition of AI to include nearly all information processing?

No, it doesn't, although there is no universal definition of what AI is, so it tends to be whatever the speaker says it is. AI is really a field of computer science, but it receives as input the disciplines of psychology, cognitive science, neurophysiology, mathematics and many more. AI is not a particular technique or algorithm. The only true test of whether something is intelligent is through its behavior. It really goes back to the Turing test (http://en.wikipedia.org/wiki/Turing_test). If the system appears to the user to exhibit intelligence, it is in fact intelligent. It doesn't matter what programming techniques were employed to produce the result.

This is a philosophical statement to which I'm not certain I can agree without a better definition of intelligent behavior. If the behavior of the amoeba in your previous example qualifies I can't agree as it makes the definition of what can be considered intelligent virtually meaningless. (If sensing and responding to an environment qualifies, then my thermostat must be considered intelligent.)

I won't ask you to define intelligence -- it's a very slippery word -- just what qualifies, in your opinion, as intelligent behavior.

That is my whole point. How long is a string?

In the case of your thermostat, yes, I would consider it to have a certain level of intelligence. A thermostat that, in addition to sensing the temperature and controlling the furnace, knows about the current utility rates at the particular time of day and adjusts the temperature downward so you can achieve lower power bills would be more intelligent than the previous one. Intelligence is a continuum, and can range from something as simple as an amoeba to a swarm of bees to a two-year-old child and up through the greatest geniuses of all time. Every time we add a new feature to a device it increases its intelligence, and this has been going on for quite some time. AI has been essentially sneaking in through the back door and nobody seems to notice (at least not on /.). The story is different if you peruse any of the many AI or cognitive research outlets on the web (journals, universities or AI-specific websites).
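The rate-aware thermostat described above fits in a few lines, which is rather the point: each added input grows the rule a little, and where along that growth one starts saying "intelligent" is exactly the continuum in dispute. A sketch with invented setpoints and rates:

```python
# Invented thresholds and rates, purely to illustrate the continuum:
# the plain thermostat uses one input, the rate-aware one uses two.

def plain_thermostat(temp, setpoint=20.0):
    return temp < setpoint  # True = furnace on

def rate_aware_thermostat(temp, setpoint=20.0, rate=0.10, peak_rate=0.25):
    if rate >= peak_rate:
        setpoint -= 2.0  # accept a cooler house while power is expensive
    return temp < setpoint

print(plain_thermostat(19.0))                  # True
print(rate_aware_thermostat(19.0, rate=0.30))  # False: waits out peak pricing
```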

That wasn't your original claim. The claim was that AI was advancing at an accelerated pace. (I don't know how you'd measure the rate) I don't disagree that what is now being termed AI is in use in many places.

AI, like hardware and software development, observes Moore's law. Each technology (such as speech recognition, visual recognition, etc) that is sufficiently advanced is layered on top of the advances from other areas, resulting in exponential rather than linear advancement.

Take the classic "conversational" AI (like Watson, Eliza, etc) for example. Most of the problems in actual speech recognition have been resolved. Further advances will come from systems that are capable to some degree of "understanding" what is being spoken (semantic analysis) and use this "thought" as input to an inference engine to perform the analysis and generate a response. The response can be fed through a grammar engine and finally to a speech generator which would respond back to the user. Each of these components exists today at various levels of sophistication and is the subject of much research. As each advances and is paired back with the others, the entire system shows rapid improvement in terms of its perceived intelligence.
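The layering described here is essentially function composition; a stub sketch of the pipeline's shape, in which every component is a placeholder rather than a real recognizer or inference engine:

```python
# Placeholder components: each one stands in for what would be a large
# system in practice, to show how the stages chain together.

def recognize(audio):   # speech -> text (speech recognition)
    return "what is the weather"

def parse(text):        # text -> meaning (semantic analysis)
    return {"intent": "weather_query"}

def infer(meaning):     # meaning -> decision (inference engine)
    return {"answer": "sunny"}

def phrase(thought):    # decision -> text (grammar engine)
    return f"It looks {thought['answer']} today."

def respond(audio):
    # improving any single stage improves the whole chain
    return phrase(infer(parse(recognize(audio))))

print(respond(b"..."))  # "It looks sunny today."
```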

Re:Still the future? (1)

narcc (412956) | more than 3 years ago | (#35148314)

If the system appears to the user to exhibit intelligence, it is in fact intelligent.

The Turing test is not only highly specific, it's more than a little contentious.

Your particular interpretation of the TT has led you to develop this strange method of assessment that lets you ascribe intelligence to virtually anything. Not only to my thermostat, but equally to a teaspoon or a cup of coffee.

Given your bizarre beliefs about what constitutes intelligence, I don't see any way we can come to any agreement or even mutual understanding.

That is my whole point. How long is a string?

One or more bytes, for null-terminated strings.

AI, like hardware and software development, observes Moore's law

This is completely unsubstantiated nonsense. I think we're done here.

Re:Still the future? (1)

medv4380 (1604309) | more than 3 years ago | (#35152588)

Thank you, Narcc, you actually grasp the point.

To mswhippingboy:

I do in fact hold a narrow view of what Intelligence is, but it is deceptively simple. You must be able to learn as you go in order to be intelligent. Let's take something simple like Tic-Tac-Toe. I have an abstract move that works against most humans if they've never seen it before, and against just about all AI. If I use it against a human and it works once, I might be able to get 1 or 2 wins out of it, but after that I will be blocked every time. If I use it against an AI and it works once, it works every time. Even the fancy Neural Networks seem to fall for it. Sometimes it doesn't work, but that's because in those cases the programmer went in and put in that one weird abstract case so that it wouldn't fall for the trick ever again. The AI is only as intelligent as the last time the programmer messed with it.

That is exactly what happened with Deep Blue as well. Kasparov was using a trap to trick the AI into making mistakes, and he won two games with the same trick. In between games the programmers went in and changed the programming so that it wouldn't fall for the trick again. What they did proved that they had failed. Deep Blue, the best chess AI on the planet, was incapable of doing what Grand Master chess players are all capable of doing, and that is learning from a past mistake. If Kasparov had used the same trap 2, 3 or even 4 times against a Grand Master, he would have lost, because the human can "reprogram" himself, but Deep Blue was totally and utterly incapable of doing that one simple task. That means Deep Blue can't even pass the Turing Test, because a human, given a bit of time, would find the weakness in its play, and since it would never learn from its mistakes it would look fake (artificial) and repetitive. A human would try to change their play style to find a way around the tricks, win or lose.

Is this magical? Probably not, I just haven't seen an AI actually do it yet. I think bottom-up AIs would have a better chance at meeting this requirement long before a top-down AI ever will.
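The "learns from a past mistake" requirement is mechanically simple to state, even if the programs discussed above don't meet it: after a loss, penalize the choices that led there, so the identical trap can't keep working. A toy sketch; the "game" is reduced to picking a reply from a table, and the trap is stipulated rather than real:

```python
import random
from collections import defaultdict

# Toy learner: picks a reply to the opponent's move and, after a loss,
# penalizes every (situation, reply) pair it used, so the same trap
# stops working once it has been sprung. The game itself is a stub:
# by assumption, reply "b" always loses to the trap.

values = defaultdict(float)  # (situation, reply) -> learned value

def choose(situation, options):
    best = max(values[(situation, o)] for o in options)
    return random.choice([o for o in options
                          if values[(situation, o)] == best])

def learn_from_loss(history, penalty=1.0):
    for situation, reply in history:
        values[(situation, reply)] -= penalty

for game in range(3):
    history = [("trap", choose("trap", ["a", "b"]))]
    lost = history[0][1] == "b"
    if lost:
        learn_from_loss(history)
    print(game, history[0][1], "lost" if lost else "survived")
# Once "b" has lost, its value sits below "a" and is never picked again.
```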

Re:Still the future? (1)

tehcyder (746570) | more than 3 years ago | (#35148636)

In the case of your thermostat, yes, I would consider it to have a certain level of intelligence.

Again, you are redefining words so that they become meaningless.

Re:Still the future? (1)

JimFive (1064958) | more than 3 years ago | (#35151826)

In the case of your thermostat, yes, I would consider it to have a certain level of intelligence.

Statements like this are why it is hard to take AI fanatics seriously. This statement reveals a form of anthropomorphism at best or an attempt at definitional ambiguity at worst. A thermostat is not intelligent. A thermostat (old style) takes advantage of materials science to make a switch close. Modern thermostats take other inputs into account (such as time of day or pricing information). However, the thermostat is not deciding(1) to change the temperature. It is opening or closing a switch based on a logic table or algorithm.

Further down in the discussion there was this exchange:

We will have true Artificial Intelligence when machines can do things that human beings do, not single cell organisms.

Such as? How about Chess, parallel parking, cross country driving? All things humans can do, now being done using AI.

None of these really exemplify AI.

  • Chess: See above. The chess engine does not decide what move to make. Chess was originally attacked as an AI problem; however, the solution that ended up being adopted is not one that utilizes artificial intelligence: all decisions are made by the programmer.
  • Parallel parking: This doesn't seem to need AI either. If the sensor input indicates that the car can fit in the space and the space is empty, this is a controls problem.
  • Cross country driving: I don't really know much about this one. Is the car making decisions or are the decisions pre-made?

I am open to suggestions that the driving problems are harder than I imagine.

As you can probably tell, my requirements for AI are slightly different from yours. AI needs to make decisions (not just implement pre-made decisions) on incomplete data, and AI needs to learn.

Now, the field of AI is a little different from AI itself. The field of AI researches things that are thought to possibly lead to AI. Expert systems, decision trees and evaluation functions are all fruits of AI research, but they are not AI. While computer vision and controls currently seem to be the glamorous parts of AI research, I think that machine learning, natural language processing and knowledge representation are more likely to lead to advances toward AI.

--
JimFive
1) Deciding in this case is used in a narrow sense requiring freedom of action.

Re:Still the future? (1)

mswhippingboy (754599) | more than 3 years ago | (#35153012)

Statements like this are why it is hard to take AI fanatics seriously.

I doubt many of us who have actually worked in the field of AI really care whether you take it seriously or not. The point about intelligence that, try as I might, I can't seem to get across here is that intelligence is the measurement of the amount of reasoning or decision-making capability an entity has. There is no concept of "this is intelligent while this is not"; rather, the concept is "this is more intelligent than that". If an entity has the ability to receive input and based on that input make a decision, it is exhibiting intelligent behavior. Is it exhibiting "human level" intelligence? Of course not. Humans possess the most advanced level of intelligence that we are aware of, but that intelligence is the result of millions of neurons with billions of interconnections, each with their own ability to receive stimulus and produce behavior. It is the aggregate effect of these smaller units of intelligent behavior that constitutes what we think of as human-level intelligence.

However, the thermostat is not deciding(1) to change the temperature. It is opening or closing a switch based on a logic table or algorithm.

Oh, and I suppose the network of neurons in your brain works on magic. There is no logic or algorithm implemented organically within the cell structures and connections. When you make a decision do you not think there is some underlying set of chemical and electrical processes that occur that generate that decision?

None of these really exemplify AI.

We'll just have to disagree here. You may not want to call a chess program an example of AI, but I'm sorry, it most definitely is. You misunderstand what Deep Blue and its team of programmers were doing. Obviously if there were just programmers in the back room making all the decisions, they wouldn't have needed a multi-million-dollar computer. While I agree that having the programmers involved during the games taints the results, the programmers were merely tweaking the algorithms as they encountered new situations their algorithms had not taken into account. The choice of moves was done using AI logic.

It's obvious you haven't thought through either the parallel parking or the cross country driving examples. Parallel parking is a lot harder than you imagine, having to process visual input, make judgments as to how much to turn the wheels, how much to back up or go forward, judging the distance to the curb and the cars in front and back. If it were that easy it would have been made available years ago.

The cross country driving I'm referring to is what MIT, Carnegie Mellon and others have been working on. The goal is a completely automated vehicle capable of navigating through traffic and driving from one end of the country to the other without any human intervention. http://www.sciencedaily.com/releases/2007/11/071105230951.htm [sciencedaily.com]

Re:Still the future? (1)

JimFive (1064958) | more than 3 years ago | (#35153690)

intelligence is the measurement of the amount of reasoning or decision-making capability an entity has.

I can agree to using this definition with the caveat that making a decision and reasoning both require freedom of action. A thermostat does not have freedom of action.

When you make a decision do you not think there is some underlying set of chemical and electrical processes that occur that generate that decision?

How humans generate decisions is a red herring. When the thermostat turns on the furnace it is the builder/user of the thermostat that made the decision.

Obviously if there were just programmers in the back room making all the decisions, they wouldn't have needed a multi-million-dollar computer. While I agree that having the programmers involved during the games taints the results, the programmers were merely tweaking the algorithms

I think you misunderstand. The decisions that I was referring to as made by the programmers are precisely the decisions of what algorithms to use for the evaluation function, how to trim the minimax tree, etc. By the time it gets to the computer, the board position is plugged in and the computer makes the determined move. The computer does not decide what move to make.
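For concreteness, this is the division of labor being described: the programmer supplies the evaluation function and the move generator, and the machine just grinds the search over whatever position it is handed. A generic minimax sketch with a made-up toy game plugged in:

```python
# Generic minimax: everything game-specific (moves, evaluation) is
# supplied by the programmer; the search itself just walks the tree.

def minimax(state, depth, maximizing, moves, make_move, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in options:
        score, _ = minimax(make_move(state, move), depth - 1,
                           not maximizing, moves, make_move, evaluate)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy game, invented purely to exercise the search: the state is a
# number, a move adds or subtracts one, and high numbers favor the
# maximizer.
score, move = minimax(0, 3, True,
                      moves=lambda s: [-1, +1],
                      make_move=lambda s, m: s + m,
                      evaluate=lambda s: s)
print(score, move)  # 1 1: against a minimizing opponent, +1 is all it can force
```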

Parallel parking is a lot harder than you imagine, having to process visual input, make judgments as to how much to turn the wheels, how much to back up or go forward, judging the distance to the curb and the cars in front and back. If it were that easy it would have been made available years ago.

Visual (camera) input isn't necessary; proximity/range sensors would suffice. A human judges distance, an automated car measures it. Decisions about how much to turn/move are likely made by the programmers, not the vehicle's computer. The difficulties that prevented this from being available years ago are likely problems with control precision, not decision logic.

RE: cross country driving
Here's where I think the problem is: Let's say you have a computer that can control a car well enough to drive around the block and down the freeway, and it knows the "rules of the road":
Q. Why can't you just tell it to drive to LA?
A. Because it isn't intelligent.
Now, just because a car CAN drive to LA, that doesn't mean that it's intelligent. If you have to program in every possible thing that could go wrong, still not intelligent. If it can adapt to unanticipated situations on the fly then it might be intelligent.
--
JimFive

Re:Still the future? (1)

mswhippingboy (754599) | more than 3 years ago | (#35154242)

This discussion has gone on far longer than it should have, but I just have one more comment.

I can agree to using this definition with the caveat that making a decision and reasoning both require freedom of action. A thermostat does not have freedom of action.

What does freedom of action mean? Unless you subscribe to some notion of spiritual intervention, every decision you make is the result of physical processes occurring within your brain. You can't escape this fact. No matter how you might wish it otherwise, every thought you have is the product of basic electrochemical processes, so what you think of as "freedom of choice" is really an illusion. Excluding the effects of quantum mechanics (which have minimal impact at the molecular level anyway), these processes are deterministic. Given enough information about the structure of your brain (the neurons, synaptic connections, chemical makeup, etc) at any moment in time, your next thought is completely predictable. A thermostat has only a couple of factors that influence when it will cut the furnace on or off (i.e. temperature, setting of the dial), so its "decisions" are limited to these states. The human brain has millions of neurons and many billions of connections. Add to this the fact that it is an analog system and the complexity becomes mind-boggling. Still, the system does exist in the physical world and is subject to the same laws of physics as the thermostat.

Re:Still the future? (1)

JimFive (1064958) | more than 3 years ago | (#35155178)

What does freedom of action mean?

Without getting bogged down in philosophical discourse it means the ability to choose otherwise.

what you think of as "freedom of choice" is really an illusion.

I am not going to argue for or against determinism here, but if the above statement is correct then there is no reasoning, no decision making, and no intelligence in the universe.
--
JimFive

Re:Still the future? (1)

tabrnaker (741668) | more than 3 years ago | (#35158976)

With the thermostat example, you're saying that a human is intelligent because it can choose to not lower the temperature at night even though it would save energy while the thermostat would have to always lower the temperature to save energy.

So basically your definition of Intelligence is the ability to choose to act non-intelligently?

Re:Still the future? (1)

JimFive (1064958) | more than 3 years ago | (#35161404)

With the thermostat example, you're saying that a human is intelligent because it can choose to not lower the temperature at night even though it would save energy while the thermostat would have to always lower the temperature to save energy.

A human may choose to not lower the temperature for any number of reasons, including being cold. Intelligence can react to unanticipated situations; the thermostat cannot.

So basically your definition of Intelligence is the ability to choose to act non-intelligently?

There are two problems with this statement as a statement. The first is ambiguity in the use of the word intelligent, where in the second usage you are using it as a synonym for rational. Intelligence does not require that one always be rigidly rational. The second is that it is a strawman.
--
JimFive

Re:Still the future? (1)

tehcyder (746570) | more than 3 years ago | (#35148620)

Also, the ability to learn is not a prerequisite to intelligence. It's definitely a "nice to have" feature so that the existing intelligence can be enhanced, but it's not a requirement. Even an amoeba has a certain amount of intelligence imparted to it through genetics. It has the ability to sense where more nutrition is in its environment and then use its rudimentary locomotion to move to the area of higher nutritional concentration. It has no ability to learn, but it does exhibit this very basic intelligent behavior.

You are cheating by redefining "intelligence". In fact, no normal person would call an amoeba intelligent, and there would be widespread resistance to calling anything other than human beings intelligent. Chimps, dogs and dolphins have been described as intelligent, but even this is by no means universally accepted.

We will have true Artificial Intelligence when machines can do things that human beings do, not single cell organisms.

Re:Still the future? (1)

mswhippingboy (754599) | more than 3 years ago | (#35149984)

Also, the ability to learn is not a prerequisite to intelligence. It's definitely a "nice to have" feature so that the existing intelligence can be enhanced, but it's not a requirement. Even an amoeba has a certain amount of intelligence imparted to it through genetics. It has the ability to sense where more nutrition is in its environment and then use its rudimentary locomotion to move to the area of higher nutritional concentration. It has no ability to learn, but it does exhibit this very basic intelligent behavior.

You are cheating by redefining "intelligence". In fact, no normal person would call an amoeba intelligent, and there would be widespread resistance to calling anything other than human beings intelligent. Chimps, dogs and dolphins have been described as intelligent, but even this is by no means universally accepted.

I'm not redefining intelligence. My point was that intelligence is a relative term. Compared to a rock, yes an amoeba has intelligence. I don't think there is really any debate (at least in the scientific world) over whether chimps, dogs or dolphins have intelligence. Any animal that has a cerebral cortex (which includes all mammals) has intelligence by even the most strict (scientific) definition. That is not to say that they are all equally intelligent, which was the point I was originally trying to make but apparently was lost. There is no "universally" accepted definition. In the same way that evolution is not universally accepted, that doesn't make it any less factual.

We will have true Artificial Intelligence when machines can do things that human beings do, not single cell organisms.

Such as? How about Chess, parallel parking, cross country driving? All things humans can do, now being done using AI.

Or do you mean by your definition it must be able to do ALL the things a human being can do? Both humans and AI can do advanced calculus, but about 95% of humans would be disqualified as intelligent by that standard.

Can you memorize a corpus of millions of interrelated facts and be able to recall the relationship between any of these within milliseconds? AI can but humans can't. Does that make machines more intelligent than humans? No, it doesn't. Your definition of AI is no definition at all, because you can't even define "things that human beings do".

Re:Still the future? (1)

tabrnaker (741668) | more than 3 years ago | (#35158924)

Show me an AI that learns to play Chess all on its own starting from nothing but the board and pieces, then I'll consider it intelligent.

Umm, can you point to ANY human who can learn chess just starting from nothing but the board and pieces? You do realize that you have to teach humans how to play chess as well.

Show me a human who could do that and I wouldn't call them Intelligent, I would call them psychic!

It's pretty trivial to teach a computer the rules of chess and have the computer build a database of moves on its own; actually, it's a lot simpler and a lot quicker than having a human do exactly the same thing. It might take a lot of work to come up with a computer that beats grandmasters all the time, but a trivial chess-learning program will easily beat more than half the human population, if not closer to 90% or upwards. Nature actually does a lot worse, since if you just taught all humans the rules and nothing else they wouldn't get very far; hence the reason that there are books out there that are basically databases of moves and algorithms for humans.

Perhaps the problem is that you think that humans exhibit intelligence, which is rarely the case.

Re:Still the future? (1)

Broolucks (1978922) | more than 3 years ago | (#35143954)

"Pathetic" is relative. If you expect human-like intelligence, yeah, the state of the art is pretty pathetic. If you compare it to what we had 40 years ago, though, there's been a lot of improvement.

And there's a lot of research in the field. Don't be silly. It might not be glamorous, but it's there.

Re:Still the future? (2)

Caspin (964414) | more than 3 years ago | (#35143960)

Netflix sponsored a hugely successful AI competition with a grand prize of 1,000,000 dollars.

Amazon uses AI to determine recommendations.

The US post office uses AI to sort mail more accurately than a human and at insane speeds.

AI can beat any human at just about any game; it is even getting pretty good at Go!

The StarCraft AI competition was just fun. Overmind used genetic algorithms and machine learning to fine-tune its response to StarCraft's various enemies.

Your spam filter uses machine learning to better classify spam.

AI's biggest failure is that once AI has solved the problem it becomes an algorithm instead of machine learning. We do all of the work but get none of the glory.
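The spam filter is the example most readers already have running; the classic recipe is a naive Bayes classifier over word counts. A stripped-down sketch; the training messages are made up, and using equal numbers of spam and ham examples lets the class priors cancel out:

```python
import math
from collections import Counter

# Invented training data; real filters learn from thousands of messages.
spam = ["cheap pills now", "win money now"]
ham = ["meeting notes attached", "lunch tomorrow"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam)
ham_counts, ham_total = train(ham)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(message, counts, total):
    # Laplace smoothing keeps unseen words from zeroing out the score
    return sum(math.log((counts[w] + 1) / (total + len(vocab)))
               for w in message.split())

def is_spam(message):
    return (log_prob(message, spam_counts, spam_total)
            > log_prob(message, ham_counts, ham_total))

print(is_spam("win cheap pills"))        # True
print(is_spam("notes for the meeting"))  # False
```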

Re:Still the future? (1)

tehcyder (746570) | more than 3 years ago | (#35148646)

AI's biggest failure is that once AI has solved the problem it becomes an algorithm instead of machine learning. We do all of the work but get none of the glory.

Are you claiming to be an AI then?

Re:Still the future? (0)

Anonymous Coward | more than 3 years ago | (#35144222)

You do know that C3PO was played by a human actor, right?

Re:Still the future? (1)

Xest (935314) | more than 3 years ago | (#35148396)

Time and time again I see this type of comment on Slashdot when a discussion of AI comes along, and sometimes it gets modded up, but christ it's so fucking ignorant.

There is strong AI, and weak AI. Strong AI is a long-term goal: it's about producing a human-like intelligence, or even better. We are nowhere near this, and slagging off the field because we're nowhere near this is like slagging off Physics as a field because it hasn't built a complete grand unified theory of absolutely everything yet.

I don't know why there's this magical view that AI is some special field that can jump straight to the end game, unlike any other field. Maybe it's ignorant geek fantasy from having watched too many films about robots and AI, I don't know, but to slag AI off as a field is stupid.

Why is it stupid to slag AI off as a field? Well, because it's very active, and because it concentrates primarily on what is achievable right now: weak AI. Weak AI suffers in that it produces systems that seem intelligent at first glance, but after a while, when the system is understood, the magic goes away. Such algorithms are often based on emergence, in that the system produces a good result without it necessarily being obvious why at first; but it's when people do understand it that they stop calling it AI, and any average joe programmer adds the algorithm to his toolbox. To give examples of the type of things weak AI has given us: everything from the software to optimise aircraft and cars, through to computer game opponents, through to grammar check tools in word processors, through to Google search, through to gesture-based input recognition and voice recognition, to telephone and network routing algorithms.

To say AI has failed is stupid; it's one of the most successful computing fields to date, having provided the algorithms behind many things that we take for granted. You'll struggle to get through your day in the modern world without encountering and using the fruits of AI research. Sure, it's still a long way off strong AI, but most fields are a long way off their end game too. AI isn't special or unique in this respect, and it sure as hell hasn't failed because of that.

If you think there's no AI research going on then you're not really qualified to comment on the topic at all. There have been some rather high-profile news articles in the mainstream news and here on Slashdot about AI research projects that have been quite hard to miss if you follow technology news over the past few years, such as this:

http://www.forbes.com/2009/11/18/ibm-brain-science-technology-breakthroughs-supercomputer.html [forbes.com]

What about Helen Keller? (0)

Anonymous Coward | more than 3 years ago | (#35143406)

Computers currently have the ability to learn via text-based chat.
Now we are saying that computers will have the ability to learn visually.

Helen Keller could only communicate in a medium which could be considered equivalent to a text-based chat.
I don't see why computers can't learn using the same techniques.

frees up the human (2, Interesting)

Un pobre guey (593801) | more than 3 years ago | (#35143408)

If you can get a computer to do [those tasks] then that's a phenomenal saving, and it frees up the human to do something more interesting.

Right. That's what's been happening. Humans have been freed up to do more interesting things, and for more pay, too. Uh huh.
So, the more we make machines do the work people do, the more interesting work there is for the rest of us? Those of us who don't own the machines? Those of us who need to make a decent living? Does this guy live on planet earth? Can it be that in 2011 there are people in decision-making positions who still believe that?

Re: frees up the human (2)

retchdog (1319261) | more than 3 years ago | (#35143698)

well, struggling to live on the street and squatting in flophouses can be considered "interesting;" he didn't say "fruitful."

an ayn rand quote would be all too easy to find, so here's one from the radical left: "Down with a world in which the guarantee that we will not die of starvation has been purchased with the guarantee that we will die of boredom."

Re: frees up the human (2)

sserendipity (696118) | more than 3 years ago | (#35143958)

Here's an Ayn Rand quote that I have to bring up whenever she is mentioned: "I love handouts from the government."

http://www.good.is/post/conservative-darling-ayn-rand-died-loving-government-handouts/ [www.good.is]

Re: frees up the human (2)

retchdog (1319261) | more than 3 years ago | (#35144850)

and it deserves wider circulation. i've always considered objectivism to be a great (if strenuous) personal ideal but horrible social policy, and i'm glad to see that ms. rand agreed.

Re: frees up the human (1)

hitmark (640295) | more than 3 years ago | (#35148026)

Until we reevaluate copyright, now that artists no longer have to risk starvation if their art does not bring food to the table.

Re: frees up the human (1)

calmofthestorm (1344385) | more than 3 years ago | (#35143792)

This is a complaint about how wealth is distributed, not a complaint against progress.

Depends on your definition of "progress" (1)

zooblethorpe (686757) | more than 3 years ago | (#35144006)

This is a complaint about how wealth is distributed, not a complaint against progress.

No, I'm serious, and not being snarky -- for many people already in positions of power, "progress" means them getting more [desirable noun]. So while the recent global financial meltdown set many of us back considerably, it has still been deemed "progress" by the financial elite, at least as I've been reading in the media. For that matter, I've been reading and hearing for over a year now about how the economy is supposedly doing better and better, i.e. "progressing", but I have yet to see my personal situation, or the standing of my friends and relatives, improve in any measurable way. And that's not for lack of working hard...

Cheers,

Re: frees up the human (1)

Un pobre guey (593801) | more than 3 years ago | (#35144150)

Are you equating the elimination of human labor with progress?

Re: frees up the human (1)

Marc_Hawke (130338) | more than 3 years ago | (#35144478)

Your point started off being that 'more interesting = more pay' is not sustainable. That's true. If money is based on any sort of 'scarce' standard, then you'll run out if you increase the pay-grade of everyone phased out by machines. What's really supposed to happen, however, is that the minimum-wage jobs shift, so everyone goes DOWN a pay-grade.

The 'idea' is supposed to be that when machines are doing the menial tasks, like 'farming', then the cost of living will go down for everyone. After removing the human cost, bread should drop to a fraction of its previous price, and so the poverty line goes down. I'm not sure how well that will be accepted by people, but that's how the economics are supposed to work. I guess it's best if you think of your pay as a multiple of the cost-of-living index. "Woop, I got a new job, it pays 3x COL, I can buy a new house." (Too bad for the guys who make .8 COL)

But then you quickly changed to say, "There's no way everyone can have an interesting job." That's patently ridiculous. There were no computer programming jobs in 1950. (Well, as we know them.) That means all the computer programmers you know now would have been doing something else back then. Probably doing something that is now being done by machines because it wasn't as "interesting."

Sure, if a machine takes your job, then it sucks for you, and you can't make a living doing the same thing. However, saying that there's nothing else for you to do is false, and probably defeatist or possibly lazy. I hesitate to say lazy because the transition could take a generation or two. So a specific individual could be royally screwed, but his offspring will almost definitely find one of the new "interesting" jobs... but perhaps for not much more pay.

Re: frees up the human (2)

Un pobre guey (593801) | more than 3 years ago | (#35144624)

You're simply repeating the standard naive argument from 60 or more years ago. Eliminating "menial" labor, more commonly called "blue collar jobs," is neither scalable nor survivable. Those people will not become engineers, scientists, professionals, or "white collar" employees as your model will effectively require. While many products and services may diminish in price, a great many people will become under- or unemployed. The poverty line will go up, not down. Beware of simply accepting pop-culture notions of capitalism, they are wrong. Many counter-intuitive results will come from making machines do all the work. Those who don't own robots will be increasingly unable to participate in the economy.

However, saying that there's nothing else for you to do is false, and probably defeatist or possibly lazy

This is a gratuitous conjecture. If you are shifting all unskilled work to machines then there will in fact be nothing else for unskilled workers to do, by definition. They are not lazy, they are displaced. Eliminating all possible forms of human labor that can be more cheaply performed by machines, and doing so as quickly as possible, is beyond foolhardy. It invites cataclysm.

Sure, if a machine takes your job, then it sucks for you

This would be funny if it weren't so preposterous and sociopathic. Don't worry, your turn will come.

Re: frees up the human (2)

icebraining (1313345) | more than 3 years ago | (#35145380)

How is automation any different than what's already happening with outsourcing? 70-80% of the GDP of many western countries already comes from the services sector.

Re: frees up the human (0)

Anonymous Coward | more than 3 years ago | (#35145790)

I'm seriously not being facetious. People doing the interesting, high-paying jobs will always want paid personal servants and service from menial service industries, and since everything will be more abundant, they'll be able to pay much more in terms of real purchasing power. Replacing manual labor with robotics is absolutely, unequivocally a good thing for everyone.

Re: frees up the human (1)

SwedishPenguin (1035756) | more than 3 years ago | (#35148394)

Personal service to the few very wealthy will not be enough to replace all the jobs lost, especially as service jobs become automated as well. The automated checkout lines that exist today are really just the beginning for the service industry; it's not going to stop there.
I do think that replacing manual labor with robotics is a good thing, but in our present economic system it spells disaster for a huge portion of the population. We need to make sure that the increased productivity caused by automation is evenly distributed, not concentrated in the hands of the few already very wealthy who happen to own the businesses that become automated.

Re: frees up the human (1)

Un pobre guey (593801) | more than 3 years ago | (#35152732)

The AC you are responding to is one of the many, many people who won't get it until it is far too late.

Re: frees up the human (1)

tehcyder (746570) | more than 3 years ago | (#35148692)

You are overlooking the fact that under a proper socialist system the wealth created by the worker robots would be shared amongst the entire population, not reserved for rich people who privately own the means of production.

This is because you can only conceive of a capitalist system where you are defined by a combination of the money you have and the "productive work" you do.

If most people no longer have jobs, so fucking what? Here, you are biased by some version of the protestant work ethic, where work is seen as a good thing in itself, whereas for most people it is just an unavoidable drain on their time and energies, which could be better used elsewhere.

Re: frees up the human (1)

Un pobre guey (593801) | more than 3 years ago | (#35152996)

You are so off the mark it is almost cute. First and foremost, the trend towards replacing human labor with robots is occurring under a corporatist plutocratic regime, not some ill-defined "socialist system." The fruits of the robotic labor serve the plutocracy, who own the means of production. This will not change in the foreseeable future.

This is because you can only conceive of a capitalist system where you are defined by a combination of the money you have and the "productive work" you do.

Now that's just ridiculous 1990s flame-war claptrap. The notion that avoidance of all "productive work" is a worthy goal for humanity is simple-minded and childish. Nobody is claiming that in the current system you are "defined by" your job. You are raising shallow, pointless issues. This has nothing to do with "the protestant work ethic." The vast majority of the human population is not protestant, and most of them work, often starting as children. If you believe that work is "an unavoidable drain on their time and energies, which could be better used elsewhere," then you really have no imagination and lack basic insight into human society and history, not to mention the very nature of "work" itself.

You have put forth no compelling argument for eliminating human labor with machines, and no alternatives for those displaced by them.

Re: frees up the human (2)

Beezlebub33 (1220368) | more than 3 years ago | (#35150470)

You're simply repeating the standard naive argument from 60 or more years ago. Eliminating "menial" labor, more commonly called "blue collar jobs," is neither scalable nor survivable. Those people will not become engineers, scientists, professionals, or "white collar" employees, as your model would effectively require. While many products and services may diminish in price, a great many people will become under- or unemployed. The poverty line will go up, not down. Beware of simply accepting pop-culture notions of capitalism; they are wrong. Many counter-intuitive results will come from making machines do all the work. Those who don't own robots will be increasingly unable to participate in the economy.

You haven't given me any reason to think that the 'naive' argument is not correct other than your saying so. Is life worse now than 60 years ago because automation has replaced people in a number of menial tasks? I don't see it. I can buy an iPod for a small amount of money because the factories that create them are largely automated, the ships that transport them from there to here are largely automated, and the packaging and delivery system is largely automated. The same applies to food, clothing, and housing. In pretty much every aspect of life, bits and pieces are automated and function semi-autonomously. Each of those is denying a person a job.

When you get on an elevator and push a button, you have denied a job to the elevator driver. But the elevator driver is not starving in the street because they got another job. It was a stupid job that did not require a human and automation of that task has made life better for everyone. It sounds like you are upset because we use front end loaders rather than having a team of people dig ditches, and now all those ditch diggers are penniless. It doesn't work that way.

If, and this is a huge if, almost all 'menial' tasks that required human intervention were taken over instantly by robots, then yes, we would have a problem. A huge percentage of our workforce would suddenly not have a job, we'd have social unrest, etc. But the way it's been happening for the past couple hundred years is that the automation has been creeping, slowly replacing tasks. Yes, people who used to work looms are no longer needed with modern cloth manufacturing, but people shift and move and retrain. There is not an additional 1% permanent underclass of loom workers out there.

Are there huge numbers of unemployed buggy whip manufacturers? No, of course not, because economics works such that those people go and do something else. Can you name three menial tasks that have been replaced by automated processes and caused long-term increases in unemployment? One?

Re: frees up the human (1)

Un pobre guey (593801) | more than 3 years ago | (#35153102)

You have in effect answered your own question with an attempt to qualify your claims:

If, and this is a huge if, almost all 'menial' tasks that required human intervention were taken over instantly by robots, then yes, we would have a problem. A huge percentage of our workforce would suddenly not have a job, we'd have social unrest, etc. But the way it's been happening for the past couple hundred years is that the automation has been creeping, slowly replacing tasks. Yes, people who used to work looms are no longer needed with modern cloth manufacturing, but people shift and move and retrain. There is not an additional 1% permanent underclass of loom workers out there.

The unemployment problem facing the US today is structural, due partly to automation and outsourcing, as well as to the integration of international trade and concomitantly of international labor costs. Large scale automation can only devalue labor further in an already unfavorable environment, at least for US workers. There is at least a "1% permanent underclass" not of loom workers but of people being displaced by these dynamics. Automation is only a part of it to be sure, but it will experience rapid growth over the next few decades. In my personal opinion, it will be the dominant factor within ten years or so. Lucky for you, this post will probably still be easily available online at that time. I personally challenge you to throw it in my face on Feb 9, 2020 and legitimately claim that I was wrong.

Re: frees up the human (1)

SwedishPenguin (1035756) | more than 3 years ago | (#35148274)

I think we need to think about radically reforming the economy as automation becomes more and more common. Eventually only creative work will be available to humans, and while creative work is great I doubt it can provide enough jobs for the entire population. If we don't do something radical to make sure everyone shares in the fruits of the increased productivity of society, we will have a huge permanently unemployed underclass, some middle to upper class workers and a few massively wealthy owners of the automated factories/services etc.
I'm very interested in machine learning and automation, I'm doing my master's in that field, but I'm also concerned about what will happen to our society.

Re: frees up the human (1)

Beezlebub33 (1220368) | more than 3 years ago | (#35150646)

Why has this not happened in the past? There has already been an enormous shift in the types of work that people do. Previously, most people did menial tasks on farms, and now they do not. After farming came manufacturing, and that is now largely automated (though not entirely, obviously; cf. China). What are people doing now that they did not do then? How did that shift happen such that we did not end up with 50% unemployment? What does that imply for the future? I think it means that people will still be people and do 'jobs' that are more abstract, interpersonal, and less directed at manufacturing. Does that imply 'creative' work? I'm not so sure, since that hasn't happened in the past.

The logical extension of the 'robots take over pretty much everything' idea is science fiction. As usual, science fiction authors have thought about this and written stories about it. I can't remember the name, but there was a Ray Bradbury story about this (can anyone help me out here?), and some of the Asimov stories apply too. If you really are concerned about what will happen to society, read some science fiction. They have at least thought about it.

Re: frees up the human (1)

Un pobre guey (593801) | more than 3 years ago | (#35153266)

Why hasn't it occurred? Because powerful computing hardware has never been so cheap and abundant. That is the new, disruptive change, and it is still growing by leaps and bounds. You can already buy cards with 64 cores running linux [tilera.com] and put them in your PC or robot. Mobile devices are already going multicore [linux-mag.com]. Distributed machine learning [ieee.org] is already a reality. Those things did not exist before, and that is why there hasn't been 50% unemployment due directly to automation. Forget Asimov and Bradbury, they did not foresee it. Try Marshall Brain [marshallbrain.com] instead.

Insect Brains (1)

jomama717 (779243) | more than 3 years ago | (#35143414)

If I could go back and do it all over again, I think I would spend my entire life trying to figure out how a mosquito's brain works. There must be research along these lines happening somewhere, but you never hear about it - they are always trying to map out mouse brains, or those of other small mammals.

Why so ambitious? Start small: if a computer program could be made that perfectly imitates a mosquito, it would be a huge breakthrough.

Re:Insect Brains (1)

vlm (69642) | more than 3 years ago | (#35143486)

If I could go back and do it all over again, I think I would spend my entire life trying to figure out how a mosquito's brain works. There must be research along these lines happening somewhere, but you never hear about it - they are always trying to map out mouse brains, or those of other small mammals.

If the assumption is that once you're done with this project you'll move on to human brains, then a mosquito has some pretty severe I/O differences compared to a typical mammalian or human brain.

Re:Insect Brains (1)

jomama717 (779243) | more than 3 years ago | (#35143796)

I'd go from there to an ant brain. Maybe a mosquito is even too ambitious - I wonder if anyone has tried to simulate the nervous system of a sea sponge, or an earthworm...

Re:Insect Brains (1)

Beezlebub33 (1220368) | more than 3 years ago | (#35150956)

The best work I know of is on the sea slug, Aplysia californica. Do a Google search for neural system simulations or models of it. A lot of work has gone into determining the types and connections of every neuron in the organism: where they came from developmentally, what they do, and how they work. There are about 19,000 neurons, and we still do not have a complete model. So we're a really, really long way from doing a mosquito brain, though I'm not sure how many neurons they have. Honeybees have about 1 million, and we have barely started thinking about modeling something like that. It's ridiculous how hard it is.

BTW, there was lots of noise about IBM modeling a 'cat's brain' a little while ago. It didn't model a cat's brain, however; it was a simulation of the same number of neurons as in a cat's brain. The difference is that the model they used did not capture the connections or learning that occur in a real brain. It was just X number of neurons in a big soup. The real hard work is determining what the connections are between all the neurons, what the connections do, and what the dynamics are. That's why there is a big connectome project going on (Google that too).
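
To make the "big soup" point concrete, here is roughly what modeling even a single neuron involves - a minimal leaky integrate-and-fire sketch in Python. The parameter values are generic textbook numbers, not measurements from Aplysia or any real cell:

<ecode>
# Minimal leaky integrate-and-fire neuron (illustrative parameters only).
# The membrane potential decays toward rest, input current pushes it up,
# and crossing the threshold produces a spike followed by a reset.

def simulate_lif(current, dt=1e-4, v_rest=-65e-3, v_thresh=-50e-3,
                 v_reset=-70e-3, tau=20e-3, r_m=1e7):
    """Return spike times (seconds) for a list of input currents (amps)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(current):
        # Euler step of tau * dv/dt = -(v - v_rest) + R * I
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A stronger input yields a higher firing rate -- graded behavior,
# not a simple on/off switch.
print(len(simulate_lif([1.8e-9] * 10000)))  # one second of weak input
print(len(simulate_lif([2.5e-9] * 10000)))  # one second of strong input
</ecode>

Getting from single units like this to an actual nervous system is exactly the connectome problem described above: the hard part is not running the units, it's knowing how to wire them.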

Re:Insect Brains (1)

tabrnaker (741668) | more than 3 years ago | (#35159010)

Could be wrong, but I do recall that they have mapped out an earthworm's nervous system; a bit of a stretch to call it a brain, though.

Ants aren't easy because you can't really view them as individuals.

Re:Insect Brains (2)

Skarecrow77 (1714214) | more than 3 years ago | (#35143504)

I swear I remember reading that IBM had already simulated an AI "Intelligence" with sophistication on par with a cat's brain, albeit not at full speed.

Yep, here it is (one of many articles on the subject I picked at random)
http://www.technewsworld.com/story/68678.html [technewsworld.com]

Re:Insect Brains (1)

WrongMonkey (1027334) | more than 3 years ago | (#35143652)

What they did was a simulation of a neural network with the same number of neurons as a cat cortex. The cortex is only part of the brain, and just simulating a bunch of neurons isn't the same as simulating the functionality. That's still a long, long way from being on par with an actual cat brain.

Re:Insect Brains (0)

Anonymous Coward | more than 3 years ago | (#35143782)

Yeah, I doubt their simulation even CONSIDERED asking if it could has cheezburgers.

Re:Insect Brains (0)

Anonymous Coward | more than 3 years ago | (#35143922)

and they didn't emulate the fur and the purr... the most important part!

Re:Insect Brains (1)

jomama717 (779243) | more than 3 years ago | (#35143734)

I remember this story - the catch was that they simply (ha) set up a software brain simulation with enough neurons and synapses (from the article: 760 million and 6 trillion, respectively) to put it above cat scale, but as far as I can tell no actual attempt was made to virtually render a living brain in a computer.

The brain of an ant contains a mere 250K neurons - seems like it would be a cake walk after the cat-scale exercise :)

More animal neuron counts [wikipedia.org]

Re:Insect Brains (1)

Beezlebub33 (1220368) | more than 3 years ago | (#35150990)

Correct, running the simulation of the ant brain would be a cake walk, assuming you knew what to put in the simulation :-)

And determining what the model should be, ah... there's the rub. How do all of those 250k neurons connect? What do they do? That's a hugely hard problem.

Re:Insect Brains (1)

jomama717 (779243) | more than 3 years ago | (#35152610)

Thanks for both of your replies - interesting stuff. Someone else posted this link [ieee.org] in reply; it goes into pretty fine-grained detail about efforts to model a fruit fly brain. Pretty fascinating. The article points out that in addition to the extreme complexity of the actual connections, it is even more complex in that "firings" of the neurons aren't simple on/off events; each neuron can fire at different strengths. It also goes into the storage space required to hold what they find - it's staggering.

Re:Insect Brains (1)

Palpatine_li (1547707) | more than 3 years ago | (#35144046)

You must never have heard of Janelia Farm, the home base of the HHMI, where they have a huge project to reconstruct the fly brain: http://spectrum.ieee.org/biomedical/ethics/reverse-engineering-the-brain [ieee.org]

Re:Insect Brains (1)

jomama717 (779243) | more than 3 years ago | (#35145388)

Indeed I've never heard of it - thanks for the link!

Re:Insect Brains (1)

Internalist (928097) | more than 3 years ago | (#35145938)

I believe what you're looking for are The Fly Papers [princeton.edu]: research by Bill Bialek and various co-authors dating back almost a decade. For a great overview, check his book, Spikes.

Re:Insect Brains (1)

jomama717 (779243) | more than 3 years ago | (#35152694)

Wow, thank you. I should have been a little more diligent in my googling on this topic, or at the very least posted something about it on slashdot earlier :)

In all of the links I've been given to various attempts at modeling "simple" brains, it becomes clear that even the most rudimentary of brains requires an incredible amount of time and effort - in addition to massive amounts of storage space. I still believe that this path is the one that will eventually lead us to true AI and an understanding of human consciousness, albeit not for another hundred years or so.

Re:Insect Brains (0)

Anonymous Coward | more than 3 years ago | (#35159194)

Actually, this probably will not be the path that leads us to true AI as long as they're trying to copy a developed brain.

The most interesting thing about the brain is that it basically comes out with large amounts of random connections, which are then whittled down and reinforced by external stimulation. What we need to understand is what the low-level units of the brain are. Even that won't be enough, for several reasons. AI is usually developed along one vector, and when we do try to give the computer more than one sensory input, we run into the problem that we don't really expose the system to reality, but to the way our higher-level functions organize reality. For example, objects don't exist in the world; there is no such thing as a straight line. At the low level, we perceive reality as pure movement. If you would like to experience the feature detectors in your visual system, just drop some LSD-25: all the whorly shapes coming at you have feature detectors specialized for them.

More important is not how the information gets acquired and organized, but what it is exactly that determines how attention is directed and how the brain is used. I can use my visual center to organize information from the outside world, or I can choose to close my eyes and use it to process a spatial map in my head, or to process math; I think logic might use the visual center as well (it's been a while since I left the field, and I haven't kept up with the research).
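
For what it's worth, the "random connections whittled down and reinforced by stimulation" idea has a classic toy formalization in Hebbian learning ("cells that fire together wire together"). A minimal sketch in Python - the stimuli, rates, and thresholds below are invented purely to show the shape of the rule:

<ecode>
import random

# Hebbian toy model: start from random connectivity, strengthen weights
# between co-active units, then prune whatever was never reinforced.
# All numbers here are made up for illustration.

N = 8                    # units
STEPS = 100              # rounds of "external stimulation"
RATE = 0.001             # Hebbian increment per co-activation
PRUNE_BELOW = 0.05       # weights under this are removed at the end

# Immature network: small random weights everywhere.
w = [[random.uniform(0.0, 0.1) for _ in range(N)] for _ in range(N)]

for _ in range(STEPS):
    # Structured stimulus: units 0-3 always fire together; the rest fire
    # at random. Co-activation drives the weight update (Hebb's rule).
    a = [1.0 if i < 4 else float(random.random() < 0.5) for i in range(N)]
    for i in range(N):
        for j in range(N):
            if i != j:
                w[i][j] += RATE * a[i] * a[j]

pruned = sum(1 for i in range(N) for j in range(N)
             if i != j and w[i][j] < PRUNE_BELOW)
print("connections pruned:", pruned, "of", N * (N - 1))
</ecode>

Real development is vastly more complicated, of course, but even the toy shows the asymmetry the parent points out: the structure ends up in the wiring, not in the units.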

Complete ignoramus makes meaningless prediction (0, Flamebait)

Anonymous Coward | more than 3 years ago | (#35143484)

And how, exactly, do they think augmented reality works? They think there's no Bayesian logic or other machine learning (k-means, etc.) in modern image recognition? Is the idea that there's a tiny man in your phone?

Seriously, anyone who could write such trash has no clue what the state of tech is, and certainly has no business predicting its future.
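
The commenter has a point about how little exotic machinery "Bayesian logic" needs. Here is a toy posterior update in Python for recognizing an object from a single detected image feature - the classes, the feature, and all the probabilities below are invented for illustration:

<ecode>
# Toy Bayesian recognition step. Given that a "circular rim" feature was
# detected, update the belief over object classes with Bayes' theorem.
# Priors and likelihoods are made-up numbers.

priors = {"cup": 0.5, "bowl": 0.3, "hat": 0.2}          # P(class)
likelihood = {"cup": 0.8, "bowl": 0.9, "hat": 0.3}      # P(feature | class)

# Bayes: P(class | feature) = P(feature | class) * P(class) / P(feature)
evidence = sum(likelihood[c] * priors[c] for c in priors)
posterior = {c: likelihood[c] * priors[c] / evidence for c in priors}

for c in sorted(posterior, key=posterior.get, reverse=True):
    print(f"{c}: {posterior[c]:.3f}")   # cup: 0.548, bowl: 0.370, hat: 0.082
</ecode>

Chain updates like this over many features and frames and you have the skeleton of the probabilistic recognition that the comment says is already standard in the field.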

AR can be useful... (1)

ruggard (1989464) | more than 3 years ago | (#35144364)

I can easily say what I look forward to, and it will come from a combination of machine learning, human input, and structured and unstructured information: the ability to look at something and know how it works, what it's made of, where it came from, and who's involved with it. I mean not having to google/wikipedia every interesting aspect, but having it show up translucently in front of what you're looking at.

This would be especially interesting for complex things like computers, electrical devices, organisms.

I'm looking forward to sub-$300 quality tablet devices so I can start working on my own version.

Stop them before it's too late! Or not... (0)

Anonymous Coward | more than 3 years ago | (#35144546)

...but the next step to making it truly useful will be when it starts to use elements of machine learning to understand the real world,

Actually, if we've learned anything from Hollywood sci-fi movies, it's that the second the machine comes to fully understand the real world is the second the nukes get launched.

And I know this will happen because almost every sci-fi movie depicting a dystopian future that I saw in my childhood has elements that are coming to fruition within my lifetime.

OSaware.com bitch (1)

Jah Shaka (562375) | more than 3 years ago | (#35145174)

We reverted to BASIC and decided to start over

Welcome Slashdot to Autonomy Vapor PR (0)

Anonymous Coward | more than 3 years ago | (#35146398)

Oh... I can't believe the editors fell for this crap. Typical MO for Autonomy PR. Call me when they actually do something new instead of buying companies to pretend they're still relevant.

Re:Welcome Slashdot to Autonomy Vapor PR (0)

Anonymous Coward | more than 3 years ago | (#35147060)

Go through the Autonomy Wikipedia page edit history on WikiScanner. Lots of "objective" editors, all with IPs from Cambridge, UK.
