Neuromorphic Algorithms Allow MAVs To Avoid Obstacles With Single Camera

timothy posted about a year and a half ago | from the toys-r-us-next-year dept.

AI 39

First time accepted submitter aurtherdent2000 writes "IEEE Spectrum magazine says that Cornell University has developed neuromorphic algorithms that enable MAVs to avoid obstacles using just a single camera. This is especially relevant for small and cheap robots, because all you need is a single camera, minimal processing power, and even more minimal battery power. Now, will we see more drones and aerial vehicles flying all around us?"

39 comments

MAV? (4, Informative)

Anonymous Coward | about a year and a half ago | (#41897095)

I'm not sure what a MAV is....
Googling...
http://en.wikipedia.org/wiki/Micro_air_vehicle [wikipedia.org]

Would it have killed the editors to define that?

Re:MAV? (1)

ryzvonusef (1151717) | about a year and a half ago | (#41897365)

OMG, You killed timothy!

Re:MAV? (1)

fredrated (639554) | about a year and a half ago | (#41897743)

I don't get it.

Re:MAV? (2)

sconeu (64226) | about a year and a half ago | (#41898053)

You bastard!

Re:MAV? (0)

Anonymous Coward | about a year and a half ago | (#41897379)

You beat me to it. I understand I have to use search engines to decode acronyms in comment feeds, but would the editors please consider linking (or disambiguating with parentheses) non-obvious acronyms?

I first thought MAV would mean "manned aerial vehicle" to distinguish it from a drone, but that made no sense contextually. Thanks, fellow AC (Anonymous Coward)!

Do Not Trust (1)

fragtag (2565329) | about a year and a half ago | (#41897197)

I went to school with a girl who had no depth perception whatsoever. She had three accidents in two years before anyone realized that she couldn't tell how far away things were. I don't think I want an autonomous drone flying above my head like that.

I would be interested to know if this robot suffers the same problem as birds do when they fly into windows. I might just pay good money to see a pack of drones crash into a glass building.

Re:Do Not Trust (1)

wisnoskij (1206448) | about a year and a half ago | (#41897617)

Except, there are many ways to fake depth perception with only one stationary eye.

Re:Do Not Trust (1)

NatasRevol (731260) | about a year and a half ago | (#41898869)

Faking it is one thing. Getting it to work well is another.

Especially when there's a weapon strapped to it.

Re:Do Not Trust (1)

ShanghaiBill (739463) | about a year and a half ago | (#41899789)

Except, there are many ways to fake depth perception with only one stationary eye.

Except these eyes are not stationary. These are aerial vehicles. They are moving.

Re:Do Not Trust (1)

wisnoskij (1206448) | about a year and a half ago | (#41899975)

Yes, and moving eyes should make it even easier, in my estimate.

Re:Do Not Trust (0)

Anonymous Coward | about a year and a half ago | (#41897689)

I might just pay good money to see a pack of drones crash into a glass building.

Be careful you might be funding terrorism...

Re:Do Not Trust (0)

Anonymous Coward | about a year and a half ago | (#41897917)

I went to school with a girl who had no depth perception whatsoever. She had three accidents in two years before anyone realized that she couldn't tell how far away things were.

Sounds like a face-saving excuse. People with one eye can drive just fine. When driving you use motion parallax, not two-eye parallax, which is only good out to about 15 feet. This "drone" is using motion parallax, just like every driver on the road. This girl was a poor driver, nothing more. One accident a year is what I've seen from most poor drivers. That's about as fast as they can make up excuses and blame others for it. More than that and they finally admit there's a problem.
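
For a sense of why binocular (two-eye) parallax falls off so quickly with distance, here is a rough back-of-the-envelope sketch in Python; the ~6.5 cm eye separation is an assumed typical value, not a figure from the article:

    import math

    EYE_BASELINE_M = 0.065  # assumed typical interpupillary distance, ~6.5 cm

    def disparity_deg(distance_m):
        """Angle between the two eyes' lines of sight to a point at distance_m."""
        return math.degrees(2 * math.atan(EYE_BASELINE_M / (2 * distance_m)))

    for d_m in (1, 5, 15, 50):
        print(f"{d_m:>3} m: {disparity_deg(d_m):.3f} degrees of disparity")

By roughly 5 m (about 15 feet) the disparity is already under a degree, which is why motion parallax does most of the work at driving distances.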

Re:Do Not Trust (2)

Forty Two Tenfold (1134125) | about a year and a half ago | (#41898157)

IIRC, optical accommodation of the human eye is good for approximating distances up to about 200 ft.

Re:Do Not Trust (0)

Anonymous Coward | about a year and a half ago | (#41899917)

He didn't say that she has one eye. I think they would have noticed that quickly. If the story is true, it sounds like a neurological problem.

Re:Do Not Trust (3, Informative)

Abreu (173023) | about a year and a half ago | (#41898041)

My mother had a car accident in her twenties and lost sight in one eye. She spent years relearning how to perceive distance, but eventually she went back to her normal life. She could drive perfectly, both in the city (and driving in Mexico City is not for amateurs) and on the road.

Re:Do Not Trust (0)

Kurrel (1213064) | about a year and a half ago | (#41898885)

Good thing computers don't have terrible algebra skills like girls do!

Re:Do Not Trust (1)

TheLink (130905) | about a year and a half ago | (#41903309)

I see people managing to fly drones (or drive) fine using 2D computer screens. And I also see people managing to crash often despite having depth perception.

So I think her problem lies elsewhere.

Clumsy (0)

Anonymous Coward | about a year and a half ago | (#41897363)

Looks really clumsy. The thing has no idea of the space around it, barely managing to dodge at the last moment.

Makes me wonder (1)

GoodNewsJimDotCom (2244874) | about a year and a half ago | (#41897615)

How does the robot know a certain location is not traversable? I know it is possible to use one camera and a large database of things to get a rough 3D guess of its environment without even moving. With one camera and movement, you suddenly have all the data to work with. The problem is, no one has developed software where you walk around a building with a video camera and it becomes a Quake level. So unless they did that, I'd be interested in how they find out what is not traversable.
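
As an aside, the "one camera and moving" case is just two-view triangulation: once you know (or estimate) how the camera moved between frames, a matched feature pins down a 3D point. A minimal NumPy sketch of standard linear (DLT) triangulation with made-up intrinsics, not the Cornell code:

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen in two views.
        P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) pixel coordinates."""
        A = np.vstack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]  # homogeneous -> Euclidean 3D point

    # Toy example: the camera translates 0.2 m sideways between two frames.
    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0,   0.0,   1.0]])                       # assumed intrinsics
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
    point = np.array([0.5, 0.0, 4.0, 1.0])                    # ground-truth 3D point
    x1 = (P1 @ point)[:2] / (P1 @ point)[2]
    x2 = (P2 @ point)[:2] / (P2 @ point)[2]
    print(triangulate(P1, P2, x1, x2))                        # ~ [0.5, 0.0, 4.0]

Systems like the PTAM project linked in a reply below do this at scale with feature tracking and bundle adjustment.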

Re:Makes me wonder (1)

ceoyoyo (59147) | about a year and a half ago | (#41899135)

"The problem is, no one has developed software that you walk around a building with a video camera, and it becomes a quake level."

Yes, actually, creating 3D models from pictures from multiple perspectives (generally acquired with video) is fairly standard. I remember seeing a DIY project, possibly here on Slashdot, using a webcam and a record turntable to create a 3D object scanner. You could make one that would make you Quake levels if you wanted to.

No, that doesn't seem to be what they've done here, probably because it requires too much processing power to do in realtime on an autonomous vehicle. It navigates the same way you do, by guessing which area of the image, based on shape, texture and colour, represents objects that are a) close and b) solid.
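
A very rough sketch of that last idea (mine, not the paper's code): score image patches with a few colour/texture statistics and a small pre-trained linear model, then steer toward the image column that looks most obstacle-free. The weights here are placeholders; a real system would learn them.

    import numpy as np

    # Hypothetical pre-trained weights for a linear obstacle score over
    # per-patch features: [mean R, mean G, mean B, texture (std of gray)].
    W = np.array([0.2, -0.1, -0.3, 1.5])
    BIAS = -0.8

    def obstacle_map(frame, patch=32):
        """Return a grid of obstacle probabilities for an HxWx3 uint8 frame."""
        h, w, _ = frame.shape
        rows, cols = h // patch, w // patch
        probs = np.zeros((rows, cols))
        for r in range(rows):
            for c in range(cols):
                p = frame[r*patch:(r+1)*patch, c*patch:(c+1)*patch].astype(float)
                gray = p.mean(axis=2)
                feats = np.array([p[..., 0].mean(), p[..., 1].mean(),
                                  p[..., 2].mean(), gray.std()]) / 255.0
                probs[r, c] = 1.0 / (1.0 + np.exp(-(W @ feats + BIAS)))  # logistic score
        return probs

    def steer(probs):
        """Pick the image column whose lower half looks most obstacle-free."""
        lower = probs[probs.shape[0] // 2:]        # obstacles project up from the ground
        return int(np.argmin(lower.mean(axis=0)))  # column index to fly toward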

Re:Makes me wonder (1)

citizenr (871508) | about a year and a half ago | (#41900941)

The problem is, no one has developed software where you walk around a building with a video camera and it becomes a Quake level. So unless they did that, I'd be interested in how they find out what is not traversable.

http://www.robots.ox.ac.uk/~gk/PTAM/ [ox.ac.uk]
http://www.youtube.com/watch?v=CZiSK7OMANw [youtube.com]
http://www.youtube.com/watch?v=mimAWVm-0qA [youtube.com]

Will this lead to other new technology? (1)

FilmedInNoir (1392323) | about a year and a half ago | (#41897909)

Cause they will need to engineer anti-drone drones now that everyone can afford drones.

Re:Will this lead to other new technology? (1)

Luckyo (1726890) | about a year and a half ago | (#41898039)

We've had that since before World War 2. It's called AAA: anti-aircraft artillery. Modern automatic AAA swats small drones out of the sky faster than you can launch them.

Or you can just jam their control signals, fake your own, and have them land on your airfield.

Or, if you're talking about neighborly relations, I'm pretty sure the shotguns used to hunt birds will make for a wonderful counter if someone decides to be dumb enough to watch you fap in the shower.

Re:Will this lead to other new technology? (1)

FilmedInNoir (1392323) | about a year and a half ago | (#41898747)

Wernstrom...! I'll make you eat those words when you see my $50 billion supersonic ramjet-powered flyswatter!

Re:Will this lead to other new technology? (0)

Anonymous Coward | about a year and a half ago | (#41901113)

I would like to see a couple of those drones get past one of these http://en.wikipedia.org/wiki/Goalkeeper_CIWS [wikipedia.org] .

depth perception, one camera (1)

CosaNostra Pizza Inc (1299163) | about a year and a half ago | (#41897983)

I guess depth perception is overrated.

Re:depth perception, one camera (1)

GoodNewsJimDotCom (2244874) | about a year and a half ago | (#41898101)

I believe you can have depth perception with one camera, so long as it is moving: because you remember the previous position and the current position, you have two images to compare. Depending on how you define depth perception, a person can close one eye, walk into a hallway, and envision the scene in 3D well enough to navigate it and avoid the obstacles. I would even go so far as to say that when coding a system to render vision into 3D, it might be easier on the programmer to start with just one camera instead of jumping straight into stereoscopic rendering.
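
The geometry is the same as stereo, except the two "eyes" are the same camera at two moments in time. A tiny sketch under the simplifying assumptions of sideways motion, a static scene, and a known distance traveled between frames:

    def depth_from_motion(focal_px, baseline_m, disparity_px):
        """Depth of a feature seen in two frames taken baseline_m apart,
        given its horizontal pixel shift (disparity) between the frames."""
        return focal_px * baseline_m / disparity_px

    # Example: 500 px focal length, camera moved 0.10 m, feature shifted 12 px.
    print(depth_from_motion(500, 0.10, 12))   # about 4.2 m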

Re:depth perception, one camera (1)

JanneM (7445) | about a year and a half ago | (#41900389)

It _is_ overrated by quite a lot of people, in the sense that they believe stereo vision is the be-all and end-all of depth perception.

Reality is more complicated. We use stereo vision as only one depth cue among several others, and mostly in close-up situations. Apart from a few kinds of cases, such as rapid, precise object manipulation, it's not a particularly important one.

Consider that most animals do not have stereo vision (their monocular fields of view do not overlap) and can navigate a complex, cluttered environment just fine. As can people that have lost sight in one eye. And none of us has any trouble accurately estimating depth when watching a movie or television; that shows, by the way, that active parallax is not essential either.

Re:depth perception, one camera (1)

aXis100 (690904) | about a year and a half ago | (#41901785)

A good example is first person shooter computer games.

No one has trouble navigating or dodging obstacles (though maybe reflexes let them down), even though we are viewing a video with no stereo cues. Object size, motion parallax, and perspective are enough.

Re:depth perception, one camera (1)

CosaNostra Pizza Inc (1299163) | about a year and a half ago | (#41911863)

I just know that I have bad vision in one eye... and that I miss out on 3D movies.

processing power... (3, Insightful)

slew (2918) | about a year and a half ago | (#41898089)

I don't think that phrase means what most of the folks on /. will assume. The "neuromorphic" algorithms they allude to are the kind that run on highly specialized hardware (e.g., this beast [modha.org]). This type of hardware really just works similarly to synapses (an integrate-and-fire architecture). Of course you could simulate the algorithm on a more conventional processor, but it would probably lose much of its low-power advantage.

FWIW, the algorithm they propose is an attempt to identify objects that project up from the ground. To do this, they try to label parts of the image as obstacle (or not), taking a raw initial guess and filtering it with a pre-trained neural net (using some sort of adjacent-region belief propagation technique).

I think they may have "cheated" a bit: in some papers they describe decomposing the image with oriented Gabor filters (edge-orientation detectors), but they admit that this decomposition doesn't currently work well on their ultra-low-power computing platform.

FYI: MAV=micro aerial vehicle
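
For readers unfamiliar with integrate-and-fire: a minimal leaky integrate-and-fire neuron is only a few lines of Python. This is a conceptual sketch of the neuron model, not the hardware linked above:

    import numpy as np

    def lif_neuron(input_current, dt=1e-3, tau=20e-3,
                   v_rest=0.0, v_thresh=1.0, v_reset=0.0):
        """Leaky integrate-and-fire: the membrane potential leaks toward v_rest,
        integrates its input, and emits a spike when it crosses v_thresh."""
        v = v_rest
        spikes = []
        for i_t in input_current:
            v += (dt / tau) * (v_rest - v + i_t)   # leak + integrate
            if v >= v_thresh:
                spikes.append(1)
                v = v_reset                        # fire, then reset
            else:
                spikes.append(0)
        return spikes

    # A constant drive above threshold produces a regular spike train.
    train = lif_neuron(np.full(200, 1.5))
    print(sum(train), "spikes in 200 ms")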

Re:processing power... (1)

ceoyoyo (59147) | about a year and a half ago | (#41899209)

I doubt very much they used specialized hardware on their MAV. Neural net algorithms work just fine on conventional processors. If they did build specialized hardware they could make it REALLY low power, but 1 watt sounds like a regular processor.

The visual centres in your brain use something very much like Gabor filters, and they're not hard to implement in hardware, so if they did "cheat" by precalculating the filters it's not a big deal.
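
For the curious, an oriented Gabor kernel (the edge-orientation detector mentioned above) is just a Gaussian envelope multiplied by an oriented sinusoid; a generic NumPy sketch, not the authors' implementation:

    import numpy as np

    def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0, gamma=0.5):
        """Real part of a Gabor filter: Gaussian envelope times an oriented cosine."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        x_t = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates by theta
        y_t = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * x_t / wavelength)

    # A small bank of four orientations; convolving an image with each kernel
    # responds strongly to edges at that orientation.
    bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]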

Re:processing power... (1)

drinkypoo (153816) | about a year and a half ago | (#41899289)

If the neural net were to run on a swarm of MAVs you'd have plenty of processing power, so long as you didn't move too many of them at once, or only moved them together. But then, while they're together you can use stereo vision...

Re:processing power... (1)

citizenr (871508) | about a year and a half ago | (#41900965)

If the neural net were to run on a swarm of MAVs you'd have plenty of processing power, so long as you didn't move too many of them at once, or only moved them together. But then, while they're together you can use stereo vision...

Neural nets are not CPU intensive; it's the learning that is hard.

Re:processing power... (0)

Anonymous Coward | about a year and a half ago | (#41905799)

I've wondered about that: is learning CPU intensive, or is it that learning takes a long time?

Re:processing power... (0)

Anonymous Coward | about a year and a half ago | (#41909775)

I would hope that in any solution, people and animals are readily detectable and avoided at all costs.

The learning isn't so much an issue as image capture / processing / reaction in real time. This is frankly a function of (capture rate * resolution) versus the speed the camera is traveling: the faster you go, the quicker you have to be able to process. An idea would be to integrate radar/sonar as well as optics to create a virtual model of the path for hazard avoidance, reducing the granularity of images to process. This method, for example, could distinguish a "leaf" from a broken piece of glass or a metal spike.
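
As a back-of-the-envelope illustration of that speed-versus-processing trade-off (all numbers made up):

    def frames_before_impact(speed_m_s, detection_range_m, fps):
        """How many frames the vehicle gets between first resolving an obstacle
        at detection_range_m and reaching it at speed_m_s."""
        return (detection_range_m / speed_m_s) * fps

    # A MAV doing 5 m/s that first resolves an obstacle at 10 m, capturing at 30 fps:
    print(frames_before_impact(5, 10, 30))   # 60 frames, i.e. a 2-second budget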

Flinch reflex. (1)

Animats (122034) | about a year and a half ago | (#41898125)

This is the logic behind a flinch reflex. It's just enough approaching obstacle detection to avoid hitting stuff. It's good to have in a UAV that has to operate near obstacles. It's not full SLAM, but it doesn't need to be.

Nice. Now get it into the toy helicopter market.

Sure Looked Pretty (1)

LifesABeach (234436) | about a year and a half ago | (#41898541)

I trust the group presenting this, but I could not verify their conclusions.