Intel Releases Open-Source Stereoscopic Software 129
Eslyjah writes "Intel has released a software library that allows computers to "see" in 3D. The library is available for Windows and Linux, under a BSDish license. Possible early applications include lipreading input. Check out the CNN Story, Intel Press Release, and project home page."
ANother feeble attempt.. (Score:2, Interesting)
Nobody needs it. Nobody. Those who do will write their own stuff..
Re:ANother feeble attempt.. (Score:2)
-Restil
Re:ANother feeble attempt.. (Score:1)
Re:ANother feeble attempt.. (Score:2, Interesting)
I've been using this library for the last couple of years, and it reduces CPU demand. I was doing pupil tracking, using it only for image capture, and I needed two PIII-860MHz machines with 400MHz RAMBUS memory. Using it for some of the basic image processing reduced demand so much that it almost ran on a single Athlon; I think the memory was too slow on those boxen, but I left the project at that point. Fast enough was 30 fps, and it was difficult even with the library. (We needed the 3D position of the eyes reliably at video rates for a stereo display and didn't want the user to have to wear anything.)
They have pretty decent MMX algorithms which work out of the box. I'd have to say that those who do write their own stuff write it in C unless they absolutely can't avoid dropping lower. I was happy to squeeze an extra two cycles out of the inner loop of my MMX code, but I'd really rather have Intel do it, since it's not the core of my work.
Computers are slow and always will be. I'm a graphics person, so six hours to render a frame is really slow; to real vision people, processing an image for depth overnight is fast. We invent ways to use the CPU well ahead of Moore's law. In another 50 years we'll be able to emulate dog vision! Maybe... just run it overnight.
(I'm speaking of the general library, this press release is just another algorithm added. They basically add common algorithms to the library, slowly.)
Re:ANother feeble attempt.. (Score:1)
It's just the way they market it that ticks me off.. My point is that these are not "consumer" solutions..
On an unrelated note: I had one job writing astronomical image processing code. First I did it the usual way, using a library floating around in the community. Then I gave it some thought and wrote my own code based on my old DWT code (that's discrete wavelet transform). The new statistics turned out to be about 10 to 15 times faster to calculate, and more robust to boot. Yahoo. Use your brains and beat Moore's law, man.. ;-))
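To make the DWT idea concrete, here's a toy sketch of my own (not the parent poster's actual code, and hypothetical function names): one level of the Haar discrete wavelet transform, the simplest member of the family, which splits a signal into smoothed averages and detail coefficients. Robust statistics like noise estimates can then be computed cheaply from the details alone.

```python
def haar_dwt_1d(signal):
    """One level of the Haar discrete wavelet transform.

    Splits a signal into a smoothed half (pairwise averages)
    and a detail half (pairwise half-differences). Statistics
    such as noise estimates can be computed on the detail
    coefficients much faster than on the full image.
    """
    assert len(signal) % 2 == 0, "need an even-length signal"
    averages, details = [], []
    for i in range(0, len(signal), 2):
        a, b = signal[i], signal[i + 1]
        averages.append((a + b) / 2.0)  # low-pass: local mean
        details.append((a - b) / 2.0)   # high-pass: local variation
    return averages, details
```

For example, `haar_dwt_1d([9, 7, 3, 5])` gives averages `[8.0, 4.0]` and details `[1.0, -1.0]`; on an astronomical image you would run this along rows and columns and estimate the noise floor from the detail band.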
Re:ANother feeble attempt.. (Score:1)
There is actually a long history of trying to develop "the" computer-vision library, but this one actually has an impressive cross-platform user base [yahoo.com]. To my knowledge, the 2nd most popular library is the Microsoft Vision SDK [microsoft.com].
But so far the only application has been (Score:1, Offtopic)
John
Re:But so far the only application has been (Score:1)
an open source smart bomb...
Fix the link (Score:1)
Usefulness? (Score:1)
1. 3D Research
2. Games
I honestly cannot think of any other logical uses for this library. And the places that work in either category probably already have something like this, but it's nice to see some free stuff. Now if only they'd GPL it...
Woops! (Score:1)
Re:Usefulness? (Score:4, Interesting)
Sure, it might sound a bit far-fetched.
But a computer being able to actually construct something in three dimensions, instead of simply a color field, has *huge* implications for image recognition.. it really does.
Take a single image of, say, a user's hands, for gesture recognition. How do you recognize the hands against the background? Color... heuristics... and whatnot.
But now it's simple: it's the stuff that's close to you! It's a completely different way to look at things.
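The "it's the stuff that's close to you" idea really is that simple once you have a depth map. A minimal sketch (my own illustration, with a hypothetical function name, not anything from the Intel library):

```python
def segment_foreground(depth_map, max_depth):
    """Crude depth-based segmentation: keep only pixels nearer
    than max_depth (e.g. a user's hands held in front of a
    webcam). depth_map is a 2D list of distances in metres;
    the result is a binary mask: 1 for 'near', 0 for background.
    """
    return [[1 if d < max_depth else 0 for d in row]
            for row in depth_map]
```

No skin-color models, no heuristics: `segment_foreground([[0.4, 2.0], [0.5, 3.1]], 1.0)` keeps the two near pixels and drops the background, however cluttered or oddly lit it is.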
Yeah ! Hand moves ! (Score:1)
Middle finger on its own is Ctrl-Alt-Del
A fast pumping motion is WebPorn
Small Finger and Index are for launching windows
A close fist to Quake some arse...
A shooting motion to "Kill" a process...
Yeah, I can see how this will improve the pantomime in front of my computer... 8)
Re:Usefulness? (Score:1)
Re:Usefulness? (Score:2, Interesting)
Mapping, and lots of different kinds of aerial/satellite photo analysis. You don't have to be looking directly at something to see a 3D view of it--you can look at stereo images--and so could a computer. Examples at Lunar and Planetary Institute: 3D Mars images [usra.edu].
Eventually, autopilots for cars.
Re:Usefulness? (Score:4, Interesting)
One word: mensuration.
Read that again. It's not a typo. Mensuration (which generally means the act or process of measuring) specifically means figuring out how tall buildings or features are from satellite photographs. Traditionally it involves calculating heights trigonometrically based on the time of day, latitude, and angle of lens inclination from which the photograph was taken, using shadows as points of reference.
This is not a research project. It's a very practical process, with tons of applications in the commercial world.
My point is that there are a lot of image analysis techniques that you've probably never heard of before. Don't mistake a lack of experience on your part for a lack of usefulness on theirs.
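The shadow trigonometry described above boils down to one line of math. A sketch of the core relation (my own illustration; real mensuration also corrects for terrain slope and sensor geometry):

```python
import math

def height_from_shadow(shadow_length_m, sun_elevation_deg):
    """Estimate a building's height from its shadow length.

    With the sun at elevation angle E above the horizon (known
    from the time of day and latitude), a vertical object of
    height h casts a shadow of length h / tan(E), so:
        h = shadow_length * tan(E)
    """
    return shadow_length_m * math.tan(math.radians(sun_elevation_deg))
```

With the sun 45 degrees up, a 30 m shadow implies a building roughly 30 m tall; stereo imagery gives you the same answer without needing the shadow at all, which is one reason this library is interesting to the remote-sensing crowd.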
Re:Usefulness? (Score:2, Informative)
t.
Re:Usefulness? (Score:1)
I'm being a pedantic bastard, but according to Webster [m-w.com] it doesn't specifically mean that. Where are you getting your definition from? Maybe I'll learn something..
Re:Usefulness? (Score:1)
As the complete asshole foobar104 noted, it generally means the "act or process of measuring". You're looking at the online version of Merriam-Webster, which is VERY general.
Go check the Oxford English Dictionary.
Re:Usefulness? (Score:2)
From some remote sensing guys that I work with. I don't pretend to have all the jargon right, but I'm pretty sure about this one.
Good thing... (Score:1)
First time around, it looked like something a lady would be doing at 'this time of the month.' -_-;
Re:Usefulness? (Score:1)
I'd better unplug my computer at night (Score:3, Funny)
When is a link not a link... (Score:4, Informative)
Bad link (Score:2, Informative)
Didn't they see 2001? (Score:5, Funny)
Didn't we learn anything from 2001? You would think that people wouldn't be so eager to teach computers to read lips.
Re:Didn't they see 2001? (Score:2, Funny)
To mimic him, just let your lips set together and barely move them when you talk. Don't worry about being intelligible. (I'm pretty sure they voiced over in that scene.)
Re:This is why I will always support Intel (Score:2, Insightful)
I have a few issues with what you said. 1) The fact that Intel had to release a compiler specially designed for its processor to get all the performance out of it is goofy; I can bet that if AMD wanted to, they could release an AMD-optimised compiler and do the same thing. Also, to get the performance out of that compiler you would have to recompile everything to use it (or wait for your closed-source software makers to provide you with a build). 2) The marketing numbers are made by benchmarking Intel processors and AMD processors; the AMD marketing number is the approximate clock speed of a comparable Intel processor. If anything, you should flame Intel for making their instruction pipeline hideously long to get their processors to ramp up to high clock speeds (sacrificing performance at the same time).
"Our web server used to run on a 1.4GHz Thunderbird, which was cheap but notoriously unstable." The processor probably wasn't unstable; I have a similar system without problems, so it's probably another part of your system. Processors either work or they don't; you can't have a half-broken processor.
"And when the fan blew out a month ago, the whole computer was taken with it!"
Why would you use a cheap fan in a webserver for a business? Why didn't you use monitoring software to automagically shut the system down when the fan went out? (And I know about the video... that's a different situation: in the Tom's Hardware video they took the heatsink off. With the heatsink on, there should be enough time to shut it down.)
Re:This is why I will always support Intel (Score:1)
Re:This is why I will always support Intel (Score:1)
the first thing to check out... (Score:1)
Dear editor, please check this [cwru.edu] out first...
I'm sure chrisd wanted to have the l33t first (story) post.. why can't we mod down
Re:the first thing to check out... (Score:1)
Re:the first thing to check out... (Score:1)
Stereoscopic imaging (Score:2, Funny)
-
Not 3D Rendering, 3D Viewing (Score:5, Informative)
A cool application (I haven't seen whether they've done this yet) is rendering in OpenGL the internal view of what the robot eyes see. It would allow you to walk through a building and then have a 3D model for various other uses, such as reverse-engineering blueprints.
This would be great technology to have on any Mars lander, or even just to analyze the data sent back.
Re:Not 3D Rendering, 3D Viewing (Score:2)
There are some commercial apps (I'm thinking of something from RealViz, but I can't remember the name) that do something like this-- generating 3D models out of a number of photos of an object-- but they require a user to set a number of control points before processing. The number of control points required ranges from the tiresome through the annoying all the way up to the silly.
Doing it automatically (or at least semi-automatically) would be a pretty serious upgrade to that software.
Re:Not 3D Rendering, 3D Viewing (Score:1, Interesting)
Re:Not 3D Rendering, 3D Viewing (Score:1)
Re:Not 3D Rendering, 3D Viewing (Score:1)
Why? (Score:1)
Sure, it might be of advantage to people who need their computer to read their lips. All in all, though, it is of very little use.
I can see it now: webcams that have stereoscopic input. A whole new breed of porn/camwhoring'll rear its useless, ugly head.
Our computers don't need to see us. We don't need THEM telling us we're all ugly.
Re:Why? (Score:2)
Oh, you small-minded moron. See my post elsewhere in the thread about mensuration. There is more image processing going on in the scientific, commercial, and government worlds than you realize.
Automated Car drives ? (Score:1)
Could also help the car find its place on the road, because 3D would allow positioning
Could have REAL Biometrics
Could have Real 3D Pron...
Uh, I think my brain just took the wrong turn here 8)
Re:Automated Car drives ? (Score:1)
The first application... (Score:2, Funny)
Application Idea (Score:3, Interesting)
What's the use? Well, besides the obvious uses in architecture, etc., how about being able to play (insert favorite 1st person shooter game here) in your house?
Geeze, now that I think about it, maybe this isn't such a good idea [slashdot.org]. What would JonKatz [slashdot.org] think?
Re:Application Idea (Score:2, Funny)
His friend Junis in Afghanistan has had this system running for years on his C64.
Here's some cool uses for 3D computer vision (Score:4, Interesting)
How about a way to have a PC recognize the position of your fingers and hands. You could use this to manipulate shapes in 3D in a 3D rendering and animation program WITHOUT SPECIAL GLOVES. You'd simply gesture into something like 3DStudioMax, Lightwave, or Caligari TrueSpace and create shapes by molding them with your fingers.
Or wouldn't it be cool to develop a "hand gesture API" which you could use to, say, play a karate game? Think about a 3D Bruce Lee in front of you kicking, and you moving your OWN hands in front of the monitor to block it (and if you wear those cool 3D shutter glasses now common on graphics cards, you'll essentially have a low-budget VR system).
Or how about a driving game where you use no driving wheel but rather simply move your hands IN THE AIR. The game could be smart enough to recognize when you shift your hands away from an invisible steering wheel to grab an invisible gear stick at your side.
Or how about a tool to let people like Stephen Hawking gesture expressions and small movements in the air and have the computer react to these actions (like moving a wheelchair around, turning lights on and off, calling on the phone, changing TV channels, etc.).
Think about the possibilities!!!
Re:Here's some cool uses for 3D computer vision (Score:3, Interesting)
What about mouse clicks???
Tap of the finger?
Hmm, interesting concept anyway.
I'd like to use this to analyze pictures taken from a model airplane to create 3d plots of the ground contour!
mouse without the mouse? (Score:1)
Re:mouse without the mouse? (Score:1)
Real robots in RobotWars! (Score:2)
Re:Here's some cool uses for 3D computer vision (Score:1)
Re:Here's some cool uses for 3D computer vision (Score:1)
> You could use this to manipulate shapes in
> 3D in a 3D rendering and animation program
> WITHOUT SPECIAL GLOVES. You'd simply gesture
> into something like 3DStudioMax, Lightwave,
> or Caligari TrueSpace and create shapes
> by molding them with your fingers.
That's already been invented. You don't need 3D imaging, you don't need 3DS MAX and you don't even need a PC. You just need a piece of clay.
Sounds like CMU (Score:2, Insightful)
check the links!! (Score:1)
Re:check the links!! (Score:1)
Re:check the links!! (Score:1)
;p
Revenge (Score:1)
Intel pushing the processor curve.... (Score:2, Insightful)
This is pretty neat, but reminds me of something from a few years ago.
I interviewed with Intel and during the interview they said in no uncertain terms that they were actively trying to keep people upgrading their systems, and hence keep the dollars rolling in. At the time, the interviewer said that the technique was largely by helping Micro$oft to keep new OSs coming that required more and more horsepower to run properly.
This is very cool in its own right (or could be, I haven't looked at it completely), but strikes me as another way they can push that curve...
Great! (Score:2)
Now the field of stereoscopic software (I know, I never even knew it existed) could open up thanks to the pushing of a sorta-powerful company.
Hey, at least we'll see an open standard file format.
possible uses (Score:1)
What I've been wanting forever (Score:2, Interesting)
Lip reading at the end of 2001 (Score:1)
Brillian business move (Score:1)
another possible use (Score:1)
Silent voice control (Score:2, Insightful)
I look forward to the day when I can dictate to my PC by just mouthing the words. Voice recognition and touchscreens will save the office worker from Repetitive Stress Injury and Carpal Tunnel Syndrome. Lipreading will make voice recognition practical for large offices and many other areas.
-Mike_L
Re:Silent voice control (Score:1)
Re:Silent voice control (Score:1)
Oh No! I'd hate a lipreading computer... (Score:2, Funny)
I'd hate for my corporate desktop to be recording everything I'd say for the posterity of HR - like my emails are today.
Even worse, imagine if my machine decided to take all of that personally...*shudder*
Re:Oh No! I'd hate a lipreading computer... (Score:1)
Chances are pretty good that when you need to swear profusely, your computer would be rebooting after a crash (and therefore would not be running lipreading software).
On the other hand, recording for posterity everything HR says can be very entertaining.
Wee (Score:1)
Good, but not new. (Score:4, Insightful)
It's a mishmash of reporter hype and stock text which describes computer vision in general ("Over the next 5 to 10 years, Intel Corp. expects computer vision to play a significant role in simplifying the interaction between users and computers"). The Sussex Computer Vision Teach Files [susx.ac.uk] page has a reasonable description of stereoscopic vision [susx.ac.uk] from 1994. Lip reading is not really a 3D problem, so stereoscopic capabilities aren't going to help much. Many of the other uses (3D environment modeling, object modeling and recognition, etc.) are being worked on (again, the algorithms aren't new; this is just a new open source implementation), but they're not easy.
I don't mean to sound pessimistic, though. OpenCV is really cool, both as a corporate contribution to open source and as a programming library, even if you never look at the code. And the Matlab interface means fewer MSVC++ sessions which end with me feeling homicidal.
Its not that complicated, kids. (Score:5, Interesting)
Here's how you make stereoscopic images with a digital camera:
1) Set up a picture like you normally would, but be mindful of the position and angle of your camera.
2) Snap a picture.
3) If the subject you're photographing is close to you, take a small step to the right. If the subject is far away, take a large step to the right.
4) Aim your camera at the subject and photograph it again.
5) Pull up both images in the photo editor of your choice.
6) Arrange the photos side by side. The first image you took should be on the left, the second image you took should be on the right.
7) Sit directly in front of your monitor and blur your eyes. If you can't blur them, try crossing them slightly. Try to focus on "the picture in the middle". If you still can't do it, hold up a pencil (eraser side up) exactly halfway between your eyeball and the screen. Focus on the eraser. The image on the screen should pop out at you in stereoscopic 3D.
For some good examples of stereoscopic images I took, go here. [ibiblio.org] Try the picture of the steering wheel first... it's really easy. You'll also see a number of stereo photos of Tumacacori, an 18th-century Spanish mission that got the shit beat out of it by Native Americans. You'll also find another picture that's rather interesting: a downward view of a deactivated nuclear missile still in the silo at the Titan Missile Museum outside of Tucson. The view extends about 20 floors below ground. Had I taken this photo in 1981 rather than 2001, I would have been shot on sight.
Cheers,
Great Pics! (Score:1)
Re:Its not that complicated, kids. (Score:1)
"Most things in here don't react too well to bullets."
Re:Its not that complicated, kids. (Score:1)
While raytracing on my 25MHz 386(!) using POV and its (then) text interface, I tried doing exactly the same thing, and the results were fantastic: take your favorite scene and re-render it with the camera moved to the right. I got some spectacular results. I don't still have them, but I somehow doubt they'd look so spectacular anymore.
On a side note, your pictures are great; it's just a shame my monitor is large enough to make viewing them properly virtually impossible (going cross-eyed is a poor substitute). Guess I'll have to hike up the res...
Re:Its not that complicated, kids. (Score:1)
Point Grey Research has something similar (Score:2)
Point Grey likes to use three-camera systems, with the cameras arranged in a triangle. This eliminates most ambiguities found with two-camera systems.
Algorithms to do this have been around for years, but only in the last few years has it become possible to do it in real time on commodity processors. Hans Moravec was the first, almost 30 years ago, back when it took him 20 minutes of mainframe time to process a stereo image. Point Grey was selling a DSP-based solution a few years ago. Now you can do it on consumer hardware.
Mobile robots should be getting much better shortly. Systems based on Polaroid sonars have the resolution of probing the world with the big end of a broom. Laser rangefinders cost way too much and have moving parts. Millimeter wave radar is complicated to use as an imager (although it opens most supermarket doors in the developed world.) Affordable, fast vision is finally here.
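For the curious, the geometry behind all these stereo rigs is the classic pinhole relation Z = f * B / d: depth is focal length times baseline over disparity. A sketch of my own (hypothetical function, not Point Grey's or Intel's API):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Two-camera stereo depth: Z = f * B / d.

    focal_px     -- focal length, in pixels
    baseline_m   -- distance between the two camera centres
    disparity_px -- horizontal shift of a matched feature
                    between the left and right images, in pixels
    The hard part, which the vision libraries actually solve,
    is finding the disparity reliably; once you have it, the
    depth itself is one division.
    """
    if disparity_px <= 0:
        raise ValueError("feature at infinity or bad match")
    return focal_px * baseline_m / disparity_px
```

A third camera, as in the triangle arrangement above, gives a second baseline in another direction, which is what resolves matches that are ambiguous along a single horizontal scanline.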
Useful technology (Score:1)
Finally! I can build my Nerf Sentry Gun! (Score:2, Funny)
not that groundbreaking (Score:2, Insightful)
Actually, IMHO, pure monocular vision is a more interesting (and challenging) problem. It's pretty clear that human stereo vision is an exercise in redundancy, since we can do pretty well with one eye closed, not to mention the fact that we perceive all kinds of 3D structure in 2D contexts (like your favourite pr0n- umm, quake screenshot ;-). The fundamental question is how we interpret 2D images into 3D models (or whatever representation we use in our heads). This is a distinctly different (and more difficult) problem from building a 3D model from a motion sequence or stereo pair.
Now... (Score:1)
Computer perception (Score:2, Insightful)
Huh? What do you mean? Well, close one eye (or put an eyepatch on) and look at your flat world. How far away is that streetlight? Hmm. How tall is that man? Hmm..
Of course, we as people are much MUCH MUCH better at perceiving (interpreting) our visual environment than computers are. Humans generally have little trouble correctly perceiving things even through partial occlusions, changes in scale, orientation, distortion (glasses might make you able to see, but straight lines become anything but..), and changes in intensity and color.
Being able to get 3d information about objects aids greatly in interpreting what they are.
An image (2D) of a hand is (almost always) full of occlusions. These occlusions are difficult to interpret in 2D, because edges in 2D carry less differentiation than a depth map would. (The distance metric is less ambiguous!)
Wouldn't you like your computer to interact with you as if it was (a very obedient) human? This helps.
Industrial Applications (Score:1)
As the presenter put it, "take a pair of sunglasses, smear them with light machine oil, put on the sunglasses, put on thick gloves, and then use a pair of chopsticks (alone) to try and pick something up". It was not clear at the time whether they were using anything stereoscopic or not. However, it was shown that video inputs for the computers came from multiple angles.
One of the robots (an arm) under development was made for use in a car factory to perform spot welding. One of these things could easily punch a hole through the side of a car, so distance measurement was important.
How much this has progressed since is unknown (to me, at least). However, the technology featured was specialised, and not necessarily at a price someone can just grab off the shelf.
Hopefully this imaging technology will give someone who wants to make their own robot a head start (somewhere). However, I would rather stay away from it if it has any of the "capacity" of the robot in the television piece that I saw.
This information is nothing new (Score:1)
This is a super library for computer vision. The best I've seen. Highly optimized. Goes way beyond anything out there in the Open source domain.
Sadly, it seems to me that many of the
cool! (Score:2)
[1]
http://slashdot.org/comments.pl?sid=11896&cid=2
useful for visual effects work (Score:2, Interesting)
The other big headache is tracking a camera move. You basically feed the footage into a camera tracking program and define tracking points in the image: features in the frame which the computer follows as they move. The software uses a lot of maths to work out where in 3D space these 2D points are, and creates a CG camera in your 3D app to match the move. You usually then have to build a rough proxy version of the set in 3D to go with it (unless the production has the bucks to spend and gets a LIDAR scan of the set). THEN you finally get to start putting in your 3D elements, that is, if you haven't shat yourself and run out of the building screaming after a week or so of staring at the same bloody footage 10-12 hours a day...
Ahem... anyway, where this new system could come in useful is using depth perception to generate a z-buffer, which would allow the computer to isolate foreground and background objects. No need to blue-screen: you could just point and click an actor to get a matte. Tracking would be made easier, as you'd have an actual 3D plate to work with; feed it to one of those programs that can auto-model 3D geometry from photos and you get your proxy set for free too...
Big blue screen shoots are tough on actors, just ask anyone that worked on one of the new star wars movies. They have to spend hours waiting for the screens to be set-up and lit, and the choreography of shooting a scene with a digital character is painful to learn, not to mention the hassles and expense of shooting with a motion control camera. Not only would a system like this speed up production but presumably with a real-time z-buffer being generated the cast and crew could interact with lo-res versions of the CG characters in real-time on monitors to get a better feel of what they are doing.
In fact, as a wider application, once we all have depth-perceiving videophones you could matte in any image behind yourself you want. Great for phoning in sick from the beach :)
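The depth-matte trick is just a per-pixel selection on the z-buffer. A toy sketch of the compositing step (my own illustration with hypothetical names; real mattes would be soft-edged, not a hard cutoff):

```python
def composite_with_depth_matte(fg_pixels, fg_depth, bg_pixels, cutoff):
    """Composite live action over a replacement background using
    a depth matte instead of a blue screen: where the measured
    depth is nearer than 'cutoff', keep the live-action pixel;
    otherwise show the replacement background (the beach, say).
    Images are flat lists of pixels here to keep the sketch short.
    """
    return [px if d < cutoff else bg
            for px, d, bg in zip(fg_pixels, fg_depth, bg_pixels)]
```

So with a 2 m cutoff, an actor standing 1 m from the camera stays in shot while the office wall 5 m back is swapped for the beach plate, with no screen rigging or keying pass at all.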
TINA - Open Source Machine Vision Libraries (Score:2, Informative)
For the past 5 years TINA [tina-vision.net] has been provided as open source under an LGPL license and development is now based at the University of Manchester, UK. [man.ac.uk]
Whilst I am very pleased that Intel recognise the importance of machine vision research, and can only commend them on their open source approach, I have some reservations regarding the use of OpenCV by the research community at large. Certainly their motives are business orientated (and one cannot argue with this). However, the contents of their library are ultimately dictated by what Intel wants, not necessarily by what the research community might need, or indeed by what is even possible (such as dense estimates of stereo).
Open Source software is vital in research disciplines where there is a significant software component. What better way to disseminate your results than to encapsulate your entire experimental apparatus in a tar file! Why should others in the field waste time reimplementing your algorithms (probably incorrectly) in order to duplicate your results? Replication is a process which sits at the very heart of any scientific endeavour.
TINA [tina-vision.net] has recently received direct funding from the European Union for development as an open source environment for machine vision and medical image analysis research. For more details, visit the website at http://www.tina-vision.net [tina-vision.net]
Sorry to rant a bit but it is not often I read something on here that I know so much about!
Vision and Height (Score:1)
Re:First! (Score:1)