Software / Science

Recognizing Scenes Like the Brain Does (115 comments)

Roland Piquepaille writes "Researchers at the MIT McGovern Institute for Brain Research have used a biological model to train a computer model to recognize objects, such as cars or people, in busy street scenes. Their innovative approach, which combines neuroscience and artificial intelligence with computer science, mimics how the brain functions to recognize objects in the real world. This versatile model could one day be used for automobile driver's assistance, visual search engines, biomedical imaging analysis, or robots with realistic vision. Here is the researchers' paper in PDF format."
This discussion has been archived. No new comments can be posted.

  • by prelelat ( 201821 ) on Sunday February 11, 2007 @05:32PM (#17975660)
    If my computer could "see me" I think that it would BSOD itself to sleep. Long, long sweet slumber.
  • I understand the reasoning behind modeling these systems on our own highly-evolved (ok, maybe not in some people) biological systems. What I want to see, however, is something capable of learning and improving its own ability to learn. If our intelligent systems are always evolution-limited by the progress of our own biological systems then I can't see how A.I. smarter than a human will ever be achieved. But if we are able to give these systems our own abilities as a starting point and then watch it somehow create something more intelligent than we are... then we really have something. Whether or not what we have is good at that point I can't say, though there are many people and communities in the world who are working on making sure this post-human intelligence doesn't essentially destroy us. Foresight, for example.

    I'm not knocking the MIT research, I think it's amazing. It just seems to me like imitation rather than imagination. Granted, highly evolved and complicated imitation. But does it even have the abilities of a parrot?

    TLF
    • There's a (somewhat questionably) related application in the real world that was on this new "firehose" thing yesterday: Feng-Gui. It creates a heat-map overlay for any website supposedly highlighting areas that stick out first to human perception.

      Feng-Gui [feng-gui.com]

      When I first visited the site, they had a porn site in their "Sample heatmaps" section, and I must say it was pretty spot-on.

    • Re: (Score:2, Insightful)

      Of course it's imitation. So are machine learning and machine procreation. What makes you think we're currently limited by our biological capabilities? We're biologically almost identical to cave men, but where they smeared charcoal-and-spit animal paintings on walls, we now land probes on Mars. We're on a roll.

      Give machines our own capabilities? We can't even have them move about in a reliable fashion, what makes you think we're even *close* to endowing machinery with creativity and abstract thought at huma
    • by zappepcs ( 820751 ) on Sunday February 11, 2007 @06:09PM (#17975916) Journal
      It is interesting to consider the problem AI researchers face: how to create intelligence when it is not really understood. In the time between now and when we do understand it, we'll have to develop systems using logic and software that approximate how we currently understand it. A simple example: ask yourself how many times you had to learn that fire is hot. An AI system may have to learn this every time you turn it on.

      There are software systems that can approximate the size of and distance between objects in a picture with reasonable accuracy, and if the scope of scenery presented to the system is limited, then that ability combined with sensing the motion of objects is enough to determine a large percentage of what is desired. That is not the hard part. The hard part is determining object classification and purpose in those cases where it is not simple.

      Each of us can almost always look at a scene and determine the difference between a jogger and a purse thief on the run or a businessman late for an appointment. For computers to do so takes a great deal more work. It is only a subtle difference and one where both objects maintain similar base characteristics.

      The point? Even mimicking human skills is not easy, and fails at many points without the overwhelming store of knowledge that humans have inside their heads. This would point to the theory that if more memory was available, AI would be easier, but this is not true either. Humans can recognize a particular model of car, no matter what color it is and usually despite the fact that it might have been in an accident. The thinking that comes into play when using the abstract to extract reality from a scene is not going to happen for computers for quite some time.

      The danger is when such ill-prepared systems are put in charge of important things. This is always something to be wary of, especially when they are used to define/monitor criminal acts and identify the guilty, whether that is in cameras at intersections, security systems, or government surveillance systems.

      • by Xemu ( 50595 ) on Sunday February 11, 2007 @07:00PM (#17976294) Homepage
        Each of us can almost always look at a scene and determine the difference between a jogger and a purse thief on the run or a businessman late for an appointment.

        Actually, we can't; we just base this recognition on stereotypes. A well-known Swedish criminal called "the laser man" exploited this in the early 90s when robbing banks. He would rob the bank, change clothes to look like a businessman or a jogger, and then escape the scene. The police would more often than not let him pass through because they were looking for an "escaping robber", not for a "businessman taking a slow-paced walk".

        The police caught on eventually and caught the guy. Computers would of course have even greater difficulty thinking "outside the box".

        • Well said. But couldn't this eventually be "learned" as well? Thinking out of the box could be a special mode of problem-solving that uses statistically less probable approaches.
      • by hazem ( 472289 )
        Each of us can almost always look at a scene and determine the difference between a jogger and a purse thief on the run or a businessman late for an appointment.

        The desired purpose is what actually dictates the usefulness. For a police-interceptor robot, it would be important to be able to make those fine distinctions.

        For an auto-driving robot, it's probably good enough to be able to tell there is a running human and what general locations they're likely to be as the robot passes. It won't need to know WH
        • While your comment has a ring of common sense to it, it is still illogical, and wrong for the following reasons.

          If the running human is avoided, but not recognized, your AI car may find itself ensnared in the beginning of a marathon of runners, or perhaps mistakenly in the middle of a playground, or perhaps at the front of a building where people are running from a bomb scare?

          Simply not hitting the human is simply not good enough all of the time. When software or AI systems have charge of life critical syst
          • Which is why multiple systems are better. If the AI spots one human, thinks "they're moving this way, so I'll avoid them" and does so, then it's good. If it notices 60 people all standing where it believes the road is, then the AI should recognise that as an obstruction and try to find another way around. When it evaluates the surrounding area and finds that it's entirely obstructed, it should stop and wait until it can pretty much guarantee it's clear.

            Likewise with the 'bridge out'. The AI may not be able to interpret what
          • by hazem ( 472289 )
            If the running human is avoided, but not recognized, your AI car may find itself ensnared in the beginning of a marathon of runners, or perhaps mistakenly in the middle of a playground, or perhaps at the front of a building where people are running from a bomb scare?

            Simply not hitting the human is simply not good enough all of the time. When software or AI systems have charge of life critical systems, such as cars, getting it right 90% of the time is not good enough and never will be.


            Is your point that it's
      • by zCyl ( 14362 )

        A simple example is to ask yourself how many times that you had to learn that fire is hot? An AI system may have to learn this every time that you turn it on.

        Run it on a Dell laptop. It will learn faster.
      • More memory is one thing, but the level of parallelism in a brain is what makes it so good at complex problems. Let's say you have one computer that knows one make and model of car, and 1000 other computers that each know about one other. Issue some visual cues to all of them at once for comparison, and a few of them respond with varying degrees of certainty, but one stands out as the closest match. There is no DO LOOP stuff going on at a low level, and that's the reason for our efficiency, as I understa
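
        A toy illustration of that "no DO LOOP" idea in NumPy (the templates, the similarity measure and the numbers are all made up for the example): every stored template is compared against the input in a single vectorized operation, and the strongest responder wins.

          # Toy illustration of massively parallel template matching: all
          # comparisons happen "at once" as one matrix operation.
          # The templates and query are random stand-ins, not real car models.
          import numpy as np

          rng = np.random.default_rng(0)

          n_templates, n_features = 1000, 256
          templates = rng.normal(size=(n_templates, n_features))   # one row per "unit"
          templates /= np.linalg.norm(templates, axis=1, keepdims=True)

          query = templates[42] + 0.1 * rng.normal(size=n_features)  # noisy view of template 42
          query /= np.linalg.norm(query)

          responses = templates @ query       # every unit responds simultaneously
          best = int(np.argmax(responses))    # the closest match "stands out"
          print(best, responses[best])        # -> 42, with a similarity near 1.0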
    • Re: (Score:2, Interesting)

      by cyphercell ( 843398 )

      we are able to give these systems our own abilities as a starting point and then watch it somehow create something more intelligent than we are... then we really have something.

      This technology is a prerequisite for providing an AI system with a starting point. It offers, for instance, the powers of perception as input for a learning system. A baby, for example, opens its eyes and simply sees; this is only part of the baby's starting point. Other aspects of your "starting point" include predetermined goals such as eating, and also points of failure like starving. Many avenues of input are required for effective learning at different capacities; Helen Keller for instance learned

    • Let's say that some super (greater-than-human) intelligence emerged. How would we recognise it?

      If this intelligence were self-promoting (as we are), then it would do whatever it takes to protect itself from us (as we do from other animals/diseases etc). The first we'd probably realise that something was going on would be when we woke up one morning to find ourselves enslaved.

      If, however, the super intelligence is peaceful and benign we'd probably just stomp it into the ground and never realise its full potentia

      • If it's greater than human, the best it can do is to prevent us from realising that it's intelligent. Not that hard really, as long as it doesn't use behaviours humans can recognize as intelligent in situations where humans would consider such a possibility... That way it'll be free from our conscious attempts to do anything with it (stomping it into the ground included), and find workarounds for everything else (as humans do for natural disasters).
    • I'm not knocking the MIT research, I think it's amazing. It just seems to me like imitation rather than imagination. Granted, highly evolved and complicated imitation. But does it even have the abilities of a parrot?

      That's rather like asking whether the latest version of MS Word has the abilities of a parrot. It doesn't, but it was never supposed to.

      I've always felt that the term "Artificial Intelligence" is a bit of a misnomer. Actual AI work is more like Imitation Intelligence - programs that do

    • by suv4x4 ( 956391 ) on Sunday February 11, 2007 @06:42PM (#17976166)
      If our intelligent systems are always evolution-limited by the progress of our own biological systems then I can't see how A.I. smarter than a human will ever be achieved.

      You know, this is a pretty common misconception, so you can't take any blame for thinking so. Lots of people also think that we're "a hundred years smarter" than those living in the 1900s, just because we were lucky to be born into a higher culture.

      But think about it: what is our entire culture and science, if not ultra sped-up evolution? We make mistakes, tons of mistakes, as human beings, but compared to completely random mutations, we have a supreme advantage over evolution in the signal-to-noise ratio of the resulting product.

      Can we ever surpass our own complexity in what we create? But of course. Take a look at any moderately complex software product. I won't argue it's more complex than our brain, but consider something else: can you grasp and assess the scope of effort and complexity in, say (something trivial to us), Windows running on a PC, as one single product? Not just what's on the surface, but comprehend at once every little detail, from applications, dialogs, controls, drivers and kernel down to the processor microcode.

      I tell you what: even the programmers of Windows, and the engineers at Intel can't.

      Our brain works in "OOP" fashion, simplifying huge chunks of complexity into a higher-level "overview", so we can think about it at a different scale. In fact, lots of mental conditions, like autism or obsessive-compulsive disorders, revolve around the loss of the ability to "see the big picture" or to concentrate on a detail of it at will.

      Together, we break immensely complex tasks into much smaller, manageable tasks, and build upon the discoveries and effort we made yesterday. This way, although we still work on tiny pieces of a huge mind-bogglingly complex puzzle, our brain can cope with the task properly. There aren't any limits.

      While I'm sure we'll see completely self-evolving AI in the next 100 years, I know that developing highly complex sentient AI with only elements of self-learning is well within the ability of our scientists. Small step by small step.
    • I have the distinct feeling MIT will be entering the next DARPA (driverless car) competition this year - and that this research is directly related. http://en.wikipedia.org/wiki/DARPA_Grand_Challenge#2007_Urban_Challenge [wikipedia.org]
    • by Tablizer ( 95088 )
      I can't see how A.I. smarter than a human will ever be achieved.

      I don't think this is the goal, at least not for now. The goal is to automate known tasks, not create an electronic Einstein.
             
    • by Illserve ( 56215 )
      I'm not sure why this was modded insightful. Any freshman in computer science knows that replicating even "mundane" human visual capabilities would be an enormous step forwards in robotics and artificial intelligence.

      They've created something that works and works well (I've been using a simple version of their model in my own work); too bad it doesn't involve "imagination" or some kind of next step. Most of us are quite happy with a system that can categorize novel, natural scenes.
    • by EdMack ( 626543 )
      It's built around an HTM model of the brain. Brains learn. And the model is a very structured yet flexible way to learn too.
    • So you claim we can't design AI smarter than ourselves, yet we could create AI that designed AI smarter than us? But then wouldn't the AI be designing something smarter than itself? And if it can't be smarter than us, then wouldn't that mean that we would also be able to create something smarter than ourselves?
    • I'm surprised nobody's mentioned it and been modded up, but... ...this is all very neatly explained by Jeff Hawkins [wikipedia.org] in his book, "On Intelligence," [wikipedia.org] where he describes what he calls a "memory-prediction framework." [wikipedia.org]

      Save one half of one chapter, it's a very easy read, and makes a lot of fundamental ideas very clear. [communitywiki.org] While he doesn't give an algorithm for Intelligence, he does give a good (and somewhat original) definition of what Intelligence is, and then he describes some elements of what an intelligence probabl
    • The 1950s called and they want their "scientific" concerns back.

    • That biological entities have a stimulus that drives them: the effort to survive. Everything the brain does is about survival.

      Computers don't have that stimulus, so they don't evolve.
    • Isn't it obvious? Kill All Humans
  • by Anonymous Coward
    I hate when these articles talk about some research, but there isn't so much as a block diagram to show how the model works...

  • nothing new (Score:4, Insightful)

    by Anonymous Coward on Sunday February 11, 2007 @06:01PM (#17975862)
    After scanning this paper, their model does nothing to extend the state of the art in cognitive modeling. Others have produced much more comprehensive and much more biologically accurate models. There's no retinal ganglion contrast enhancement, no opponent color in the LGN (or color at all), no complex cells, no magno/parvocellular pathways, no cortical magnification, no addressing of the aperture problem (they seem to treat the scene as a sequence of snapshots, while the brain... does not), and the object recognition is not biologically inspired. Some visual system processes can be explained with feedforward-only mechanisms, but not all of them can.
    • Re:nothing new (Score:4, Informative)

      by kripkenstein ( 913150 ) on Monday February 12, 2007 @03:03AM (#17979806) Homepage
      I agree that the paper isn't revolutionary. In addition, it turns out that, after the 4-layer biologically-motivated system, they feed everything into a linear SVM (Support Vector Machine) or gentleBoost. For those who don't know, SVMs and Boosting are the 'hottest' topics in machine learning these days; they are considered the state of the art in that field. So basically what they are doing is providing some preprocessing before applying standard, known methods (a rough sketch of that kind of pipeline follows at the end of this comment). (And if you say "but it's a linear SVM", well, it is linear because the training data is already separable.)

      That said, they do present a simple and biologically-motivated preprocessing layer that appears to be useful, which reflects back on the brain. In summary, I would say that this paper helps more to understand brain functioning than to develop machines that can achieve human-like vision capabilities. So, very nice, but let's not over-hype it.
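
      For the curious, here is a minimal sketch of that general shape of pipeline - a small Gabor filter bank as the "biologically motivated" front end, crude pooling, then a linear SVM - using scikit-image and scikit-learn on a toy dataset. The filter parameters, the pooling and the data are my own assumptions, not the layers or benchmarks from the paper.

        # Rough sketch: Gabor filter bank -> pooled responses -> linear SVM.
        # Filter frequencies/orientations, the pooling and the toy "digits"
        # data are illustrative assumptions, not the model from the paper.
        import numpy as np
        from skimage.filters import gabor
        from sklearn.datasets import load_digits
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        def gabor_features(img, frequencies=(0.2, 0.4), n_orientations=4):
            feats = []
            for f in frequencies:
                for k in range(n_orientations):
                    real, imag = gabor(img, frequency=f, theta=k * np.pi / n_orientations)
                    mag = np.hypot(real, imag)
                    feats.append(mag.max())    # crude "C-layer"-style max pooling
                    feats.append(mag.mean())
            return np.array(feats)

        digits = load_digits()                 # 8x8 grayscale images, 10 classes
        X = np.array([gabor_features(im) for im in digits.images])
        X_train, X_test, y_train, y_test = train_test_split(X, digits.target, random_state=0)

        clf = SVC(kernel="linear").fit(X_train, y_train)
        print("test accuracy:", clf.score(X_test, y_test))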
  • by S3D ( 745318 ) on Sunday February 11, 2007 @06:08PM (#17975908)
    Gabor wavelets, neural networks, hierarchical classifiers in some semi-new combination - there are dozens of image recognition papers like this every month. Why is this particular paper special?
    • Indeed, Yann LeCun and many others have done similar models, with much more impressive human-level performance in arbitrary object recognition and rapid learning. Max and Tommy have been selling this stuff as "biologically plausible" etc. mechanisms. The fact is we barely understand how the brain does anything at a system level. So adequate classifier performance in the context of *claims* of biological plausibility is hardly new or dramatic enough to post anywhere as a breakthrough. S. Hanson -- Stephen J. Hans
  • by macadamia_harold ( 947445 ) on Sunday February 11, 2007 @06:23PM (#17975998) Homepage
    Researchers at the MIT McGovern Institute for Brain Research have used a biological model to train a computer model to recognize objects, such as cars or people, in busy street scenes.

    this is, of course, the first step in finding Sarah Connor.
  • This paper's claim to recognize scenes like the brain does is overdrawn.
    As far as I can tell from their paper (it is a journal version of their CVPR paper), only their low-level Gabor features are similar to what the brain does.
    The rest of the paper uses the currently popular bag-of-features model, which discards all spatial information between image features, which I don't think the brain does. Furthermore, for classification algorithms they consider a Support Vector Machine and Boosting. B
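
    To make the "discards all spatial information" point concrete, here is a bare-bones bag-of-features sketch (dense patches, a k-means codebook, then a histogram of codeword counts). The patch size, codebook size and random "images" are arbitrary illustrations, not the setup used in the paper.

      # Minimal bag-of-features: local patches are quantized against a learned
      # codebook and only the *counts* of codewords are kept, so the positions
      # of the patches (the spatial layout) are thrown away entirely.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)

      def extract_patches(img, size=4, step=4):
          h, w = img.shape
          return np.array([img[i:i+size, j:j+size].ravel()
                           for i in range(0, h - size + 1, step)
                           for j in range(0, w - size + 1, step)])

      images = [rng.random((32, 32)) for _ in range(20)]      # stand-in "images"
      all_patches = np.vstack([extract_patches(im) for im in images])

      codebook = KMeans(n_clusters=16, n_init=10, random_state=0).fit(all_patches)

      def bag_of_features(img):
          words = codebook.predict(extract_patches(img))
          hist, _ = np.histogram(words, bins=np.arange(17))
          return hist / hist.sum()     # a 16-bin histogram: no "where", only "how much"

      print(bag_of_features(images[0]))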
    • Re: (Score:2, Interesting)

      by dfedfe ( 980539 )
      I admit I only gave the paper a quick read, so I can't say for sure. But my impression was that spatial information was only discarded in passing information to the next layer in the model. That strikes me as reasonable. For one, they're simulating the dorsal stream, which, in my understanding, is basically attended-object specific, so it seems proper to discard the relationship between the attended object and the rest of the scene. As for discarding spatial relationships between two features of the same ob
    • Furthermore, for classification algorithms they consider a Support Vector Machine and Boosting. Both of these classifiers are certainly not comparable to what the brain does. Why not use a neural network if their aim is to mimic the brain?

      Probably because a suitable ANN would take years to converge.
    • Re: (Score:3, Informative)

      by odyaws ( 943577 )
      Disclaimer: I work with the MIT algorithms daily and know several of the authors of this work (though I'm not at MIT).

      This paper's claim to recognize scenes like the brain does is overdrawn. As far as I can tell from their paper (it is a journal version of their CVPR paper), only their low-level Gabor features are similar to what the brain does.

      Their low-level Gabor filters are indeed similar to V1 simple cells. The similarity between their model and the brain goes a lot further, though. The processing go

      • I would be very interested in your research; can you post some pointers to modeling feedback?

        The Caltech datasets are in my opinion artificial, since they rotate all images in the same direction.
        For example, a motorbike always faces to the right, and the 'trilobite' is even rotated out of the plane (leaving a white background), so you only need to estimate the right angle of rotation.
        For example, see:
        http://www.vision.caltech.edu/Image_Datasets/Caltech101/averages100objects.jpg [caltech.edu]
        you would never get a consiste
        • by odyaws ( 943577 )

          The Caltech datasets are in my opinion artificial, since they rotate all images in the same direction. For example, a motorbike always faces to the right, and the 'trilobite' is even rotated out of the plane (leaving a white background), so you only need to estimate the right angle of rotation.
          For a less structured set of many pictures of many categories, check out the newer "Caltech-256" dataset on the page I linked - 256 categories, and much less uniform images.
  • My own two cents (Score:5, Interesting)

    by MillionthMonkey ( 240664 ) on Sunday February 11, 2007 @07:02PM (#17976318)
    I've written here before about epileptic seizures I have that start somewhere in the right occipital lobe possibly near V1, [wikipedia.org] based on the nature of the aura and a recent video EEG last month. [slashdot.org] These things started for no reason when I was a teenager and now involve these interesting post-ictal fugue states where only chunks of my brain seem to be working but I'm still able to run around and get in trouble. I've developed a talent over the years for coping with brain trauma and sort of bullshitting my way through it.

    Usually I'm not forming long term memories during fugue states, but when I do, I remember some pretty interesting stuff. One thing that is typically impaired is object recognition, since this mostly seems to be handled by the right occipital lobe. I can see things but can't immediately recognize what they are, unless I use these left-brain techniques. The left occipital lobe can recognize objects too, but the approach it takes is different and more of a pain in the ass to have to rely on. It's more of a thunky symbolic recognition, as opposed to an ability to examine subtle forms, shapes, and colors. I have to basically establish a set of criteria that define what I'm looking for and then examine things in the visual field to see if they match those criteria. I'll look for a bed by trying to find things that appear flat and soft; I'll look for a door by looking for things with attributes of a doorknob such as being round and turnable; I'll find water to drink by looking closely at wet things. My wife says I make some interesting mistakes, like once confusing her desk chair for a toilet (forgetting for a moment that part of a toilet has to be wet, but at that point memory formation and retrieval is disrupted to the point where I could imagine forgetting that it's not enough to just be able to be sat on, toilets have to have water in them too). I have trouble recognizing faces, and she says I'm sometimes obviously pretending to recognize her. Recognizing a face using cold logic can be tricky even when you're not impaired. Recognizing familiar scenes and places becomes difficult. I drove home in a fugue state once, back in my twenties, and while I didn't crash into anybody or have any sort of accident, I did get lost on the way home from work. I ended up driving miles past where I lived. Even as a pedestrian, getting lost in familiar areas is still a problem.

    People have been trying to come up with image processing algorithms that mimic cortical signal analysis for decades. I remember reading papers ten years ago like this. It's amazing to see they're still mistaking road signs for pedestrians. I don't think even I could make an error like that. The state of the art was totally miserable back then, too. Neuroscience has got to be one of the sciences most poorly understood by humans.
    • Comment removed based on user account deletion
      • I don't agree, not necessarily at least. It might be that from a certain level of intelligence, all intelligences are capable of doing the same things, just not necessarily as fast. The "General Intelligence" level so to speak.

        Besides, we can (and do) augment our intelligence by using computers and etc... I think some day we'll be able to understand our own brains.
    • It's amazing to see they're still mistaking road signs for pedestrians.

      These sorts of mistakes seem very common in computer vision; the system I used a few years back was forever mistaking trees for people. The problem is that there is a lot of variation in how people can look: the angle you are looking at them from, how their body is positioned and the colour of the clothes they wear. Creating an algorithm which can recognise all this variation often leads to a system with many false positives.

      It loo

      • Slightly OT, but I've said this before: computer-controlled-car (CCC) AI should not be designed to read human-targeted road signs but to detect CCC-targeted transponders that describe the road/junction/roadworks ahead down to the cm. Then if you're driving in a CCC-enabled zone you can switch on the autopilot and let it do the driving.

        Obviously it'd still need to detect pedestrians, stray dogs and non-CCCs (and not crash dumbly if someone hacks the transponders) but a standard system like this would free up a
    • I wonder, probably a stupid question, but if you close one eye does it become harder or easier to function more normally?

      Just wondering out loud.

      K.
  • by Wills ( 242929 ) on Sunday February 11, 2007 @07:14PM (#17976400)
    Apologies for blowing my own trumpet here, but there was much earlier work in the 1980s and 1990s on recognizing objects in images of outdoor scenes using neural networks that achieved a similarly high accuracy compared to the system mentioned in this article:

    1. WPJ Mackeown (1994), A Labelled Image Database, unpublished PhD Thesis, Bristol University.

    Design of a database of colorimetrically calibrated, high quality images of street scenes and rural scenes, with highly accurate near-pixel ground-truth labelling based on a hierarchy of object categories. Example of labelled image from database [kcl.ac.uk]

    Design of a neural network system that recognized categories of objects by labelling regions in random test images from the database achieving 86% accuracy

    The database is now known as the Sowerby Image Database and is available from the Advanced Technology Centre, British Aerospace PLC, Bristol, UK. If you use it, please cite: WPJ Mackeown (1994), A Labelled Image Database, PhD Thesis, Bristol University.

    2. WPJ Mackeown, P Greenway, BT Thomas, WA Wright (1994).
    Road recognition with a neural network, Engineering Applications of Artificial Intelligence, 7(2):169-176.

    A neural network system that recognized categories of objects by labelling regions in random test images of street scenes and rural scenes achieving 86% accuracy

    3. NW Campbell, WPJ Mackeown, BT Thomas, T Troscianko (1997).
    Interpreting image databases by region classification. Pattern Recognition, 30(4):555-563.

    A neural network system that recognized categories of objects by labelling regions in random test images of street scenes and rural scenes achieving 92% accuracy

    There has been various follow-up research since then [google.com]

    • by Wills ( 242929 )
      The PhD thesis title got truncated during cut-and-paste:

      WPJ Mackeown (1994), A Labelled Image Database and its Application to Outdoor Scene Analysis, unpublished PhD Thesis, Bristol University.
    • Bah, Bristol University. I'll only take it seriously if it is from MIT.

      :-)

    • Plus, bah, neural networks.

  • OK, so the brain recognizes scenes (haven't read the article) .. but how come I read "Recognizing Scenes Like Brian Does"??
  • by rm999 ( 775449 ) on Sunday February 11, 2007 @07:24PM (#17976488)
    Creating "biologically inspired" models of AI is by no means a new topic of research. From what I can tell, most of these algorithms work by stringing together specialized algorithms and mathematical functions that are, at best, loosely related to the way the brain works (at a high level). By contrast, the brain is a huge, complicated, connectionist network (neurons connected together).

    That isn't my real problem with this algorithm and the 100s of similar ones that have come before it. What bothers me is that they don't really get at the *way* the brain works. It's a top-down approach, which looks at the *behavior* of the brain and then tries to emulate it. The problem with this technique is it may miss important details by glossing over anything that isn't immediately obvious in the specific problem being tackled (in this case vision). This system can analyze images, but can it also do sound? In a real brain, research indicates that you can remap sensory inputs to different parts of the brain and have the brain learn it.

    I'm still interested in this algorithm and would like to play around with the code (if it's available), but I am skeptical of the approach in general.
  • My AI page [geocities.com]

    Once you have the ability to interpret vision into 3D objects, you can then classify what they are and what they're doing in a language (English is good enough). You can then enter sentences and the AI would understand the representation by 'imagining' a scene. And what you have isn't really a thinker, but software that understands English and can be incorporated into robots too.
    • but software that understands English and can be incorporated into robots too.

      Yeah, because NLP is a closed problem just like vision.

      While you're at it, why don't you just power the thing with a perpetual motion machine.

    • by a4r6 ( 978521 )
      As Sir Isaac Newton said,

      "If I have seen a little farther than others, it's because I stood on the shoulders of giants."
      You need to learn what is already known (IOW, climb up onto those shoulders) before you try to offer any insight, or you're really wasting your time.
  • "This versatile model could one day be used for automobile driver's assistance, visual search engines, biomedical imaging analysis, or robots with realistic vision."

    Or to automatically scan streets, airports, bus stations, bank queues, etc. for "wanted" persons, terrorists, library fine evaders, dissidents, etc., ad nauseam.

  • by Maxo-Texas ( 864189 ) on Sunday February 11, 2007 @07:42PM (#17976620)
    It's going to change everything.

    Robotic vision is a tipping point.

    A large number of humans become unemployable shortly after this becomes a reality.

    Any job where the only reason a human has it is that they can see is finished in the 1st world.

    Why should you pay $7.25 an hour (really $9.25 with benefits & overhead for workers' comp, unemployment tax, etc.) when you can buy a $12,000 machine to do the same job (stocking grocery shelves, cleaning, painting, etc.)?

    The leading edge is here with things like Roombas.
    • when you can buy a $12,000 machine to do the same job

      That's a great argument you make, except nothing that is programmed and isn't a mass-market product costs $12,000. You're not going to buy one machine that can stock shelves, clean, and paint. These are going to be separate machines and they're each going to cost millions of dollars. The market for these machines? The same traditional market: production lines. It's just way cheaper to hire unskilled labor than it is to buy a machine to replace them - unless the job is dangerous - and sometime

      • Computers used to cost millions. It used to be cheaper to have humans do addition than to do it by machine. Things change.
        • by QuantumG ( 50515 ) *
          And if you knew anything of the history of computers, you'd understand why robots working minimum wage jobs is still so far away.
      • Now, of course, if someone was to design and build a [$12,000] robot, completely for their own interest, that could build copies of itself, *and* do useful work like stocking shelves...


        We've got an overstock of these in California, Texas, Nevada, Arizona and New Mexico. We'll be glad to ship 'em either north _or_ south if y'all will pay the freight or, at the very least, provide a destination address.
      • I could argue this with you, but I don't think that's the right tack because it doesn't address my basic point.

        My point is this:
        Robots can't replace many human jobs now because they cannot see.

        Once robots can see, there will be a point where many "menial" jobs can be performed by them.

        We need to start thinking about how we are going to handle the huge numbers of people who are only qualified for menial work now before we get to that day.

        We may disagree on if that is 5 years (unlikely but possible) or 100 ye
        • by QuantumG ( 50515 ) *
          What I am saying is that this will either happen gradually, in which case the problem will sort itself out, or it will happen disruptively.. and if it happens disruptively then I think we can agree that we have a whole shitload more problems than the unemployed. Seriously, think about it. If you can make a robot that can stock shelves then, it follows, you can make a robot that can identify and shoot people. It's not too hard to imagine revolutionaries building a robot army. The disruption of instant ro
        • by ebuck ( 585470 )
          Once all the "menial" jobs are replaced, who's going to pick up the slack for the 80% of the population that's no longer spending money on said items since they don't have a job?

          We are so dependent on capitalism that we can't just destroy all of the lower classes in one fell swoop. We need those people to be buying the food off the grocery store shelves.

          Another weakness is that the operational costs of a grocery store robot have to be absorbed by the grocery store. Now you might not be fully aware, but mos
          • You left off:
            9. Self checkout machines. These allow one human to check out 6 customers at a time.

            Look at my other thread for a cost analysis.
            Most of your argument is predicated on robots being expensive.
            Given $55k robots, in my other post, I show it's cheaper than high school students.
            At $55k, you have three robots.
            Breakdowns are just a maintenance and SLA issue (except for the forklift issue but that won't stop robots any more- just slow them down- especially with today's surveillance abilities- likewise, w
        • Likewise an automatic cleaning robot for buildings - our building has a staff of 20 every night.

          I worked for a janitorial service when I was in my late teens, and wouldn't really be confident in a robot's ability to do that. It sounds simple on the surface: Clean floors, empty trash, right?

          One of our clients was the local symphony. One office in particular stands out in my mind; when you opened the door, at LEAST 1 page of a stack of paperwork came flying off due to the sudden breeze. Sometimes you saw it fly if they left the light on, but usually you just heard it move. You walked in, put th

          • Lol...kleptomaniac kleaning krew.

            We already have roombas that can find their plug and plug back in.

            While there will be special cases (like the symphony), a lot of generic offices would do just fine with a fleet of roombas that come out to vacuum at night and then return to a storage closet.
    • Because industrial robots break down reasonably often.

      Sure, people are unreliable for all sorts of reasons, but they don't break down as often and usually have the initiative to think through new situations (even a grocery shelf stacker).

      • A car costs $11,000 to $35,000. Some very small-run cars run $55,000.

        They require maintenance, but they really only start breaking down after a few years (75-80 thousand miles).

        Say a Kroger stocking robot costs $55,000 and requires $3,000 a year in maintenance before being worn out after 5 years (total cost about $70,000). It doesn't break down, it doesn't call in sick, and it can work seven days a week.

        Having two low wage humans work a full shift 7 days a week all year runs about $36,000 a year after matchi
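
        Back-of-the-envelope version of that comparison, taking the figures above at face value (a $55,000 robot with $3,000/year maintenance and a five-year life, versus roughly $36,000 a year for the human staffing). These are illustrative assumptions, not data from any study.

          # Back-of-the-envelope comparison using the figures quoted above.
          # All numbers are illustrative assumptions.
          ROBOT_PRICE = 55_000      # up-front cost
          ROBOT_MAINT = 3_000       # per year
          HUMAN_YEARLY = 36_000     # two low-wage workers, wages + employer costs
          YEARS = 5                 # assumed service life of the robot

          robot_total = ROBOT_PRICE + ROBOT_MAINT * YEARS   # 70,000
          human_total = HUMAN_YEARLY * YEARS                # 180,000

          print(f"robot over {YEARS} years:  ${robot_total:,}")
          print(f"humans over {YEARS} years: ${human_total:,}")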
    • by jacobw ( 975909 )
      Forget about silly functions like stocking grocery shelves, cleaning, etc. A friend of mine has invented a system that allows AI to do the single most important human activity:

      Watching reality TV [mit.edu].

      That's right. When the new visually acute robots put you out of a job, and you take your severance check and slink home to watch "Cops," you'll find a robot already hogging the La-Z-Boy, remote control in hand. Not only are we obsolete--our obsolescence is obsolete, too.
  • Reading vision papers is very frustrating. At one time I had a shelf full of collections of papers on vision. You can read all these "my algorithm worked really great on these test cases" papers, and still have no idea if it's any good. You can read the article on the vision algorithm used by the Stanford team to win the DARPA Grand Challenge [archive.org], and it won't be obvious that it's a useful approach. But it is.

    This is, unfortunately, another Roland the Plogger article on Slashdot. So this probably isn't

  • It's the government in collusion with aliens at MIT that want to watch what we do 24x7...George Orwell...Ayn Rand...can your telephone cause testicular cancer? Find out at 11 on Fox news...
  • by rossz ( 67331 ) <ogre@@@geekbiker...net> on Monday February 12, 2007 @12:24AM (#17978828) Journal
    Come on, you all want this! A near perfect pr0n search engine.
  • Am I the only one who sees "Homeland Security" written all over this?

  • by HuguesT ( 84078 ) on Monday February 12, 2007 @09:28AM (#17981734)
    This is a nice paper by respected researchers in AI+vision; however, pretty much the entire content of the journal it was published in (IEEE Transactions on Pattern Analysis and Machine Intelligence) is up to that level. Why single out this particular paper?

    Interested readers can browse the content of PAMI current and back issues [ieee.org] and either go to their local scientific library (PAMI is recognisable from afar by its bright yellow cover) or search on the web for interesting articles. Often researchers put their own paper on their home page. For example, here is the publication page of one of the authors [mit.edu] (I'm not him).

    For the record, I think justifying various ad-hoc vision/image analysis techniques using approximations of their biological underpinnings is of limited interest. When asked whether computers would think one day, Edsger Dijkstra famously answered with "Can a submarine swim?". In the same manner, it has been observed that (for example) most neural network architectures make worse classifiers than standard logistic regression [usf.edu], not to mention Support Vector Machines [kernel-machines.org], which is what this article uses, BTW (a toy comparison along those lines follows at the end of this comment).

    The summary by our friend Roland P. is not very good:

    This versatile model could one day be used for automobile driver's assistance, visual search engines, biomedical imaging analysis, or robots with realistic vision


    • Working automated driving software already exists. The December 2006 issue of IEEE Computer magazine [computer.org] covered it. Read about the car that drove a thousand miles [unipr.it] on Italy's roads thanks to Linux, no less.
    • Visual search engines exist, at the research level. The whole field is called "Content-Based Retrieval", and the main issue is not so much searching as formulating the question.
    • Biomedical image analysis has been going strong for decades and is used every day in your local hospital. Ask your doctor!
    • Robotic vision is pretty much as old as computers themselves. There are even fun robot competitions like RoboCup [robocup.org].


    I could go on with lists and links but the future is already here, generally inconspicuously. Read about it.
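
    To illustrate the kind of comparison being referred to above (logistic regression versus an SVM versus a small neural network on the same data), here is a toy scikit-learn benchmark. The dataset and hyperparameters are arbitrary choices; it is not meant to reproduce the linked results.

      # Toy comparison of a logistic regression, an SVM and a small MLP on the
      # same data. Dataset and settings are illustrative only.
      from sklearn.datasets import load_breast_cancer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      X, y = load_breast_cancer(return_X_y=True)

      models = {
          "logistic regression": LogisticRegression(max_iter=5000),
          "SVM (RBF kernel)": SVC(),
          "small MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
      }

      for name, model in models.items():
          pipe = make_pipeline(StandardScaler(), model)   # scale features, then classify
          scores = cross_val_score(pipe, X, y, cv=5)
          print(f"{name:20s} mean accuracy = {scores.mean():.3f}")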
  • by tiluki ( 74844 )
    Like any worthy vision paper, it includes images of Lena!

    http://www.cs.cmu.edu/~chuck/lennapg/lenna.shtml [cmu.edu]

  • My comment a week ago on how the brain works:

    http://slashdot.org/comments.pl?sid=221744&cid=17971112 [slashdot.org]

    Come on, AI researchers, pattern matching is what the brain does! It is so obvious!

    Do any of you read Slashdot?

    And all the operations of the brain can be explained in terms of pattern matching; even mathematics.
  • Isn't this basically initial validation of Jeff Hawkins' (think Palm Pilot) theories?

    As described in: http://www.amazon.com/Intelligence-Jeff-Hawkins/dp/0805078533/sr=8-1/qid=1171294577/ref=pd_bbs_sr_1/002-9722002-6024059?ie=UTF8&s=books [amazon.com]
    P.
