
Combining Two Kinects To Make Better 3D Video

Soulskill posted more than 3 years ago | from the this-is-what-happens-when-yo dept.

Hardware Hacking 106

suraj.sun sends this quote from Engadget about improving the Kinect 3D video recordings we discussed recently: "[Oliver Kreylos is] blowing minds and demonstrating that two Kinects can be paired and their output meshed — one basically filling in the gaps of the other. He found that the two do create some interference, the dotted IR pattern of one causing some holes and blotches in the other, but when the two are combined they basically help each other out and the results are quite impressive."
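The gap-filling described in the summary can be sketched as a simple per-pixel merge. This is not Kreylos's actual software; it assumes the two depth maps are already registered to the same viewpoint and that holes are encoded as zero, both of which are illustrative conventions:

```python
import numpy as np

def merge_depth_maps(depth_a, depth_b):
    """Merge two aligned depth maps, using each to fill the other's holes.

    Holes (pixels where a Kinect lost its IR dot pattern) are encoded as 0.
    Where both cameras report depth, keep the nearer (smaller) value.
    """
    merged = np.where(depth_a == 0, depth_b, depth_a)        # fill A's holes from B
    both = (depth_a > 0) & (depth_b > 0)
    merged[both] = np.minimum(depth_a[both], depth_b[both])  # both valid: keep nearer
    return merged

# Toy example: each map has a hole (0) the other can fill.
a = np.array([[0, 500], [600, 0]])
b = np.array([[450, 0], [610, 700]])
print(merge_depth_maps(a, b))  # [[450 500] [600 700]]
```

In the real setup the two sensors view the scene from different poses, so the registration step (projecting both clouds into a common frame) is where most of the work lives.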


Well taht is (2, Funny)

Anonymous Coward | more than 3 years ago | (#34386808)

awesome

Two eyes are better than one (0)

anss123 (985305) | more than 3 years ago | (#34386828)

It's amusing how Kinect needs four microphones + calibration to replicate a feat we humans only need one ear for. To see 3D it apparently has to send out infrared dots, and even then it probably does a worse job than the good ol' brain.

Re:Two eyes are better than one (0)

Anonymous Coward | more than 3 years ago | (#34386840)

What feat would that be that one stationary ear could do as well as kinect?

Re:Two eyes are better than one (4, Funny)

anss123 (985305) | more than 3 years ago | (#34386864)

What feat would that be that one stationary ear could do as well as kinect?

Recognize your voice from the kitchen

Re:Two eyes are better than one (1, Interesting)

Anonymous Coward | more than 3 years ago | (#34387292)

Ear's can't do that, that's the brain. And for the kinect that would be the program in the device it connects to.

Re:Two eyes are better than one (0)

Anonymous Coward | more than 3 years ago | (#34387390)

Why the apostrophe for the plural of ear? And why no apostrophe for "connects"?

Re:Two eyes are better than one (1, Funny)

Anonymous Coward | more than 3 years ago | (#34387548)

And why no apostrophe for "connects"?

Because that's not a plural. What do you think I am, an idiot?

Re:Two eyes are better than one (2, Funny)

Combatso (1793216) | more than 3 years ago | (#34388080)

I do, kinda.

Re:Two eyes are better than one (2, Insightful)

Anonymous Coward | more than 3 years ago | (#34389806)

I do, kinda.

Well, your wrong. "Connects" is a verb, and everybody knows that even plural verbs do not get apostrophe's. Sheesh man, do some research.

Re:Two eyes are better than one (1, Insightful)

Anonymous Coward | more than 3 years ago | (#34398000)

Thi's is a brilliant thread.

Re:Two eyes are better than one (2, Interesting)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34386886)

There is a class of visual inputs that makes the human brain just tie itself in knots, even once you know what the trick is: "optical illusions", Escher stuff, and the like.

I wonder what the class of "optical illusions" for the Kinect's vision system and algorithms is... Off the top of my head, I'd imagine that retroreflective materials might kind of freak it out; but I'd be curious to know if there are any stimuli that cause it to wig out in weird ways, the way that optical illusions do the human visual system.

Re:Two eyes are better than one (1)

anss123 (985305) | more than 3 years ago | (#34386936)

I wonder what the class of "optical illusions" for the Kinect's vision system and algorithms is

I'm guessing kinect makes assumptions based on common human bone structure, e.g. something like a dog might freak it out and make it explode.

Re:Two eyes are better than one (1)

JuzzFunky (796384) | more than 3 years ago | (#34396768)

Check out this video where a guy has taken the model from the kinect and replaced the points with variable sized blobs. Looks cool. Fat Cat [vimeo.com]

Re:Two eyes are better than one (2, Interesting)

EdZ (755139) | more than 3 years ago | (#34387350)

As the video demonstrates, the Kinect is fooled by spurious pattern projections from other Kinects in the vicinity. This could be solved by replacing the IR source in the 'projector' (actually a point source and a pinhole grid) with one of a different wavelength, and adding appropriate filters to the IR cameras in each Kinect. Each Kinect would then only see IR light of the 'colour' it emits. This would probably require the use of slightly brighter IR emitters.

Re:Two eyes are better than one (1)

jrobot (1239050) | more than 3 years ago | (#34394546)

CMOS sensors' light-gathering capability falls off with increasing wavelength. Silicon's quantum efficiency at NIR is much lower than at visible wavelengths, so there's not a huge range of NIR to play in before QE falls off.

IR diodes don't emit light at a single wavelength. Not only do they shift longer with temperature, but the rated wavelength is really an average of the range the wavelength drifts over.

Very tight bandpass filters also tend to drift shorter in wavelength off axis.

Re:Two eyes are better than one (1)

EdZ (755139) | more than 3 years ago | (#34402114)

The Kinect uses a laser diode, so its output is close to monochromatic. You'd need some very narrow band-pass filters, but these are available, albeit sometimes bespoke.

Re:Two eyes are better than one (1)

Traksius Egas (12395) | more than 3 years ago | (#34396770)

This could be solved by replacing the IR source in the 'projector' (actually a point source and a pinhole grid) with one of a different wavelength, and adding appropriate filters to the IR cameras in each Kinect.

Or maybe time the grid light to be off while the other camera is on, and vice versa, alternating back and forth quickly like 3D LCD 'shutter' glasses. That way the grids would not interfere with each other. You don't have to turn off the cameras; just don't use the 3D grid data from those frames where the opposite grid is active.

Just my .01 worth. :)

Re:Two eyes are better than one (1)

MrQuacker (1938262) | more than 3 years ago | (#34386912)

Think about it; the Kinect is given a job most of us would laugh out of town. Build a sophisticated camera capable of full 3-D input and peripheral pickup, using only water and jelly. Build an eye and ears.

We don't know how to use jelly yet, so we settle with plastic and metal.

Still a crazy task.

Re:Two eyes are better than one (0)

Anonymous Coward | more than 3 years ago | (#34391222)

Think about it; the Kinect is given a job most of us would laugh out of town. Build a sophisticated camera capable of full 3-D input and peripheral pickup, using only water and jelly. Build an eye and ears.

We don't know how to use jelly yet, so we settle with plastic and metal.

Still a crazy task.

Nobody touch this post! I'm having it for my fucking dinner!

Re:Two eyes are better than one (5, Insightful)

JanneM (7445) | more than 3 years ago | (#34386982)

The "good ol' brain" does a fairly crappy job, actually. 3D vision systems like these tend to perform quite a bit better than we do. And we only do as well as we do because we can use a lot of indirect clues based on our long experience with a 3D-world - we know how big stuff normally is, for instance, so we can judge distance from size. Mess up those clues and we completely lose it.

And even with good clues we don't actually measure distance well. Have somebody place items in a parking lot or some place like that, then try to guess the distances. You're not going to be very accurate. Try to estimate distance vertically rather than horizontally and you'll do even worse; you have fewer clues and less experience to fall back on.

Re:Two eyes are better than one (0)

Anonymous Coward | more than 3 years ago | (#34387086)

Ha! I get the joke about judging heights; less to fall back on!

Re:Two eyes are better than one (1)

L4t3r4lu5 (1216702) | more than 3 years ago | (#34387136)

I would say both would be very accurate, considering no actual measuring would be taking place. You can extrapolate from points of reference:

- A car is approx 3m long, 2m wide. A parking space is about the same.
- The lanes between spaces are 2 cars wide, to allow for idiots who can't follow the arrows.
- Basic trig can give you any distance in a parking lot.

The same applies to buildings. The average person is 6' tall, with 18" spare to the ceiling. The floor slab is approx 6", making each floor approx 7'. Multiply $floors by 7. For offices, assume false ceilings; 9' per storey.

This does go back to your "time spent in the 3D world" point, though. If we had no points of reference, yes, we'd suck. However, we do, so we don't.
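The storey arithmetic in the comment above can be written out as a tiny helper. The 7 ft and 9 ft per-storey figures are the commenter's rules of thumb; the function name and interface are mine:

```python
def building_height_ft(floors, office=False):
    """Estimate building height from a floor count.

    Commenter's rules of thumb: ~7 ft per residential storey,
    ~9 ft per office storey (false ceilings and service voids).
    """
    per_storey = 9 if office else 7
    return floors * per_storey

print(building_height_ft(10))               # 10-storey residential: 70 ft
print(building_height_ft(10, office=True))  # 10-storey office: 90 ft
```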

Re:Two eyes are better than one (1)

SuiteSisterMary (123932) | more than 3 years ago | (#34387246)

Which is exactly what the parent said. Besides, look at it this way: you're using cars as an overlay grid, the Kinect is using a dot pattern projected in infrared. What's the difference? Or, if you were to go to an empty grassy field, how would your distance estimates do?

Re:Two eyes are better than one (1)

The_mad_linguist (1019680) | more than 3 years ago | (#34387296)

You can get training if you really care.

Re:Two eyes are better than one (1)

L4t3r4lu5 (1216702) | more than 3 years ago | (#34387424)

Rugby pitches. 100m long. Divide or multiply as required. Plus, a healthy background in outdoor pursuits gave me a good eye for horizontal distance.

Plain buildings a la MiniPeace, however, would throw me completely.

Re:Two eyes are better than one (1)

TDyl (862130) | more than 3 years ago | (#34387278)

And even with good clues we don't actually measure distance well.

Yep, just look at the quarterback for the Carolina Panthers.

Re:Two eyes are better than one (1)

Abstrackt (609015) | more than 3 years ago | (#34387362)

Your comment reminds me of an interesting experiment you can do in 2D. Show people a page containing nothing but a creature that can't possibly exist and ask how big it is; obviously there's no way to answer without scale. If you put a picture of an elephant next to the creature it looks huge, but if you put a picture of a mouse next to it the creature looks small.

Re:Two eyes are better than one (0)

Anonymous Coward | more than 3 years ago | (#34387394)

You'd better start dreaming if you think the brain does a crappy job. If the brain seems to do poorly here, it's because it's optimized for real life; the values you refer to, while very important in specific cases (such as engineering), just weren't that important compared to other aspects of the real world.

Re:Two eyes are better than one (1)

VShael (62735) | more than 3 years ago | (#34387622)

That's typically a product of training. We don't have much experience with it, because we don't need it.

But take an aborigine, and ask him to estimate how far something is, and you'll get a good accurate answer, even if it's not in feet and inches.

Re:Two eyes are better than one (1)

Missing.Matter (1845576) | more than 3 years ago | (#34387888)

And even with good clues we don't actually measure distance well. Have somebody place items on a parking lot or some place like that, then try to guess the distances. Not going to be very accurate.

And yet we are able to navigate and interact with our environment with a high degree of precision. When I'm driving a car, for instance, without looking at how fast I'm going, knowing distances, the weight of the car, my acceleration and deceleration capabilities, I'm able to stop at a line painted on the road to within half a meter. Just with my eyes!

I work with robots, and even knowing all this information to a high accuracy, there is so much work that needs to be done with localization, navigation, planning, etc. to get it to mimic my performance. The robot must be equipped with laser range finders, wheel encoders, global positioning systems, and an array of other sensors. If only I could slap a vision system on it and call it a day. Whatever the human brain is doing under the hood, it's incredibly sophisticated. We're bad at estimating distances because we don't need to.

Re:Two eyes are better than one (1)

JanneM (7445) | more than 3 years ago | (#34388066)

And yet we are able to navigate and interact with our environment with a high degree of precision.

Yes, we are. Our vision system is pretty successful when you look at how we actually use it in the real world. We don't actually need to know the precise distance to things; what we want to know is rather direction and time to impact and similar and we're really, really good at that (look up tau-margin estimation for instance). Though note that with a human-level vision system you would still need a lot of those sensors you talk about. Our vision system absolutely depends on proprioception to figure out where we are in the world and compensate for our own movements; we need separate dead-reckoning systems and (again) a lot of experience to be even somewhat correct about our movements over large distances and so on.

But I wrote this in reply to a poster that seemed to believe we humans are actually better than Kinect at the specific vision tasks it's built to do. Too many people seem to believe that the mammalian vision system is inherently great, at whatever tasks we imagine, and that if we could only make something like it our machine vision problems would be solved. That is simply not the case.

Re:Two eyes are better than one (1)

anss123 (985305) | more than 3 years ago | (#34388468)

But I wrote this in reply to a poster that seemed to believe we humans are actually better than Kinect at the specific vision tasks it's built to do.

But we are better. Kinect is built to recognize faces and body postures, it’s not built to estimate the distance from you to the TV even if it can do that more accurately than we can.

Re:Two eyes are better than one (1)

drinkypoo (153816) | more than 3 years ago | (#34391252)

But we are better. Kinect is built to recognize faces and body postures, it’s not built to estimate the distance from you to the TV even if it can do that more accurately than we can.

That is a ridiculous statement. Kinect builds heightmaps; if that's not estimating the distance from you to the TV then I don't know what is. In fact, Kinect does the other cool things it can do specifically because it is built to estimate the distance from you to the TV, when other camera systems are not. If that were ALL it did, you could still do the same stuff on the 360 in software, but that would take away from the available processing power, which is why embedding it as a complete solution was the smart thing to do.

Re:Two eyes are better than one (1)

anss123 (985305) | more than 3 years ago | (#34397080)

I think you misunderstood me. Building height maps is just a means to an end; the end being figuring out just what you're doing with those limbs of yours. The Kinect wasn't created/built for the purpose of measuring objects, even if it's better at this than us humans.

Re:Two eyes are better than one (1)

drinkypoo (153816) | more than 3 years ago | (#34397472)

The Kinect wasn't created/built for the purpose of measuring objects, even if it's better at this than us humans.

That's a big fail of a response. The statement that prompted your original comment was "But I wrote this in reply to a poster that seemed to believe we humans are actually better than Kinect at the specific vision tasks it's built to do." and you said "But we are better. Kinect is built to recognize faces and body postures, it’s not built to estimate the distance from you to the TV even if it can do that more accurately than we can." But that is plainly false. Kinect is built to measure the distance from you to the TV, it has hardware specifically for this purpose. We do not have hardware specifically for this purpose; we infer the information from our existing sensors. So no, I understood your comment perfectly, and I simply disagree with everything you said.

Re:Two eyes are better than one (1)

anss123 (985305) | more than 3 years ago | (#34397888)

I simply disagree with everything you said

So you believe that Kinect is superior to humans at recognizing faces and body postures?

Re:Two eyes are better than one (0)

Anonymous Coward | more than 3 years ago | (#34386996)

I think what's most amusing is that Microsoft will be patting itself on the back soon for their recent "We meant to make it open" stance.

Oh Microsoft. The amount of good press you're getting from the Kinect is worth more than the money you'll be making selling the damned things.

Re:Two eyes are better than one (1)

Takichi (1053302) | more than 3 years ago | (#34387004)

You are essentially just comparing the brain to the computer. We would likely have better spatial resolution if we had more ears and eyes as well. And most of the capabilities of the ear, especially with regard to space, are learned through combination with other senses like vision and touch. If you lived your life from the beginning with a single ear as your only sense, you'd probably do worse than a Kinect unless someone explicitly taught you what the things you were hearing meant, if you could ever learn to understand them at all.

this-is-what-happens-when-yo (1)

L4t3r4lu5 (1216702) | more than 3 years ago | (#34386838)

u don't conform to the character limit for sub-headings?

Purpose of Headings (2, Insightful)

nschubach (922175) | more than 3 years ago | (#34388902)

Headings are for brief topic summaries (a few words), not content.

Anybody in optics? (4, Interesting)

fuzzyfuzzyfungus (1223518) | more than 3 years ago | (#34386860)

How cost- and/or physics-prohibitive would it be to exploit the fact that "IR" actually covers a number of frequencies of invisible-to-the-naked-eye light with similar properties? Could one modify a Kinect with appropriate narrow-band filters, so that a second Kinect, filtered for a different narrow band, wouldn't even see the dot pattern of the first? If so, how many Kinects could you stack this way (or, at what point do the narrowness and wavelength-tolerance requirements become absurdly costly)?

Is that A) wholly impractical, because of some effect the reflecting materials would have on the IR wavelengths; B) sure, it's possible, but have you checked the supplier's price list for narrowband IR filters recently; or C) just a bit of eBay and some steady hands?

Perhaps more practically, I wonder if the Kinects could (with some mixture of hardware shutters and firmware or driver mods) be made to trade off sample rate for coverage (i.e. if the Kinects are ordinarily taking 60 frames/second, could two Kinects be made to take 30 frames/second each, turning off their IR source when it isn't their turn and turning it on when it is), or does their mechanism of operation require too much time to calibrate itself on startup?
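The sample-rate-for-coverage trade can be simulated with a toy round-robin scheduler: each frame "belongs" to one projector, and each Kinect keeps depth only from its own frames, so two units each drop from 60 fps to 30 fps. This is an illustrative sketch, not actual Kinect firmware or driver behavior:

```python
def schedule_frames(n_frames, n_kinects=2):
    """Round-robin the IR projector among n_kinects Kinects.

    Returns, per Kinect, the frame indices it may use for depth,
    i.e. frames where only its own dot pattern is projected.
    Effective depth rate per unit: 60 / n_kinects fps.
    """
    usable = {k: [] for k in range(n_kinects)}
    for frame in range(n_frames):
        owner = frame % n_kinects  # whose turn to project on this frame
        usable[owner].append(frame)
    return usable

print(schedule_frames(6))  # {0: [0, 2, 4], 1: [1, 3, 5]}
```

The hard part glossed over here is synchronization: the units would need a shared clock (or an external trigger) so each one knows whose turn it is.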

Re:Anybody in optics? (3, Informative)

Xelios (822510) | more than 3 years ago | (#34387146)

He touched on these ideas in another of his videos [youtube.com] from before this latest one.

Re:Anybody in optics? (2, Informative)

Vario (120611) | more than 3 years ago | (#34387164)

It is definitely possible to use narrow bandpass filters. In the infrared region there are various filters available with a wavelength window of 10 nm at 1000 nm. These filters are not available at Walmart, but they are not too costly either. Depending on size, quality, wavelength and other parameters you should be able to buy some for around $50 (Thorlabs).

To actually hack the Kinect you'd have to test whether there are other infrared filters in the optical path and whether the camera is sensitive enough at different wavelengths. I don't think the properties of the reflecting materials should be of any concern; the reflection of materials in a household room should not change over a small frequency difference in the infrared region.

Using a time-multiplexed approach with shutters, or software that switches the cameras on and off, might work in theory but would be rather impractical without significant changes to the Kinect hardware.

Re:Anybody in optics? (1)

Ceriel Nosforit (682174) | more than 3 years ago | (#34387572)

Wouldn't polarized filters do the trick?

Re:Anybody in optics? (1)

slim (1652) | more than 3 years ago | (#34387710)

Wouldn't polarized filters do the trick?

As someone in another thread points out, polarization is lost when light is scattered on reflection (3D cinemas need special screens).

Also, polarizing gives you two channels. Wavelength selection gives you many.

Re:Anybody in optics? (1)

Mr Thinly Sliced (73041) | more than 3 years ago | (#34387722)

Not really, as the surface absorbing the light has to preserve the polarisation. As anyone who's set up a dual-projector 3D rig with polarised light can attest, you need a special surface coating to get good preservation of polarisation.

Paint with silver particles in it is typically used for painting 3D screens, for example.

Re:Anybody in optics? (1)

warmflatsprite (1255236) | more than 3 years ago | (#34388554)

The best way to do this would be to modify the firmware to include some kind of pseudorandom modulation scheme (think binary chip sequence). However, the processor on the Kinect is a PrimeSense proprietary ASIC. Good luck reverse engineering it.

Shuttering might work, but as you said, you'd reduce the overall framerate, meaning worse motion capture. Also you'd need to synchronize the shutters somehow, and that'd be a pain.

Filtering would change the sensitivity of the camera, but it won't do much to the laser. You'd have to swap it out for the specific band you're filtering to. Also, I'm pretty sure optical band pass filters aren't cheap.

My personal hope is that we see some kind of modulation in later versions of the device, either because Microsoft asks for it, or because PrimeSense just starts including it by default in their ASICs.
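The chip-sequence idea above can be sketched as direct-sequence-style coding: each projector toggles its dot pattern over successive frames according to its own pseudorandom ±1 sequence, and a receiver correlates a pixel's per-frame brightness against its own sequence to suppress the other emitter. Everything here (sequence length, amplitudes, the `chips` and `detect_own_signal` helpers) is hypothetical and not PrimeSense's actual scheme:

```python
import numpy as np

rng = np.random.default_rng(0)

def chips(n_emitters, length=64):
    """One pseudorandom +/-1 chip sequence per emitter (illustrative)."""
    return rng.choice([-1.0, 1.0], size=(n_emitters, length))

def detect_own_signal(observed, own_chips):
    """Correlate a pixel's per-frame brightness against our chip sequence.

    A large normalized correlation means the light at this pixel tracks
    our own projector, even with another emitter superimposed; a near-zero
    value means the light belongs to someone else's pattern.
    """
    return float(np.dot(observed, own_chips) / len(own_chips))

c = chips(2)                                        # sequences for Kinect 0 and 1
own = detect_own_signal(2.0 * c[0] + 2.0 * c[1], c[0])  # both projectors lit
other = detect_own_signal(2.0 * c[1], c[0])             # only the interferer lit
print(own)    # close to 2.0: our component survives the interference
print(other)  # close to 0.0: the other pattern averages out
```

With real sensors the correlation would run per pixel across a window of frames, which again costs temporal resolution, much like the shutter approach.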

Re:Anybody in optics? (1)

damien_kane (519267) | more than 3 years ago | (#34395790)

Why couldn't you filter both the diode and the camera?
I.e. (Normal/Today's world):
Diode 1 emits light across the entire IR-A spectrum (700 - 1400nm)
Camera 1 detects light across entire IR-A spectrum (700 - 1400nm)

Diode 2 emits light across the entire IR-A spectrum (700 - 1400nm)
Camera 2 detects light across entire IR-A spectrum (700 - 1400nm)


Apply filters to both emitter and detector, on both Kinect 1 and Kinect 2:
Diode 1 emits light across the entire IR-A spectrum (700 - 1400nm); a filter is applied so that only 700-800nm wavelengths actually leave the Kinect (all others absorbed, for the most part)
Camera 1 detects light across the entire IR-A spectrum (700 - 1400nm); a filter is applied so that only 700-800nm wavelengths enter the Kinect (all others absorbed, for the most part)

Diode 2 emits light across the entire IR-A spectrum (700 - 1400nm); a filter is applied so that only 900-1000nm wavelengths actually leave the Kinect (all others absorbed, for the most part)
Camera 2 detects light across the entire IR-A spectrum (700 - 1400nm); a filter is applied so that only 900-1000nm wavelengths enter the Kinect (all others absorbed, for the most part)

Granted, the filters won't stop 100% of the other wavelengths, but they'd probably mute them enough to be undetectable (or nearly so).
Some recalibration may be required, but other than that it should just work.
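The band-carving scheme above generalizes: given the IR-A window, a filter width, and a guard gap between units, you can compute how many Kinects fit. The 100 nm width and guard figures mirror the comment's 700-800 / 900-1000 nm example and are illustrative, not measured Kinect specs:

```python
def assign_bands(ir_lo=700, ir_hi=1400, width=100, guard=100):
    """Carve the IR-A window into non-overlapping emitter/detector bands.

    Each unit gets a `width` nm band; a `guard` nm gap between bands keeps
    imperfect filters from bleeding into each other. Returns (lo, hi) pairs.
    """
    bands, lo = [], ir_lo
    while lo + width <= ir_hi:
        bands.append((lo, lo + width))
        lo += width + guard
    return bands

print(assign_bands())  # [(700, 800), (900, 1000), (1100, 1200), (1300, 1400)]
```

Note that the QE objection raised earlier in the thread still applies: silicon sensitivity drops toward 1400 nm, so the longer-wavelength bands may be impractical even if the filters exist.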

Re:Anybody in optics? (1)

warmflatsprite (1255236) | more than 3 years ago | (#34396500)

It depends on the laser diode they're using. If it emits a wide enough band of light, then sure, it can be filtered. I just doubt that it does.

So wont 3 Kinects make 3D video? (1)

MrQuacker (1938262) | more than 3 years ago | (#34386884)

So wont 3 Kinects make 3D video?

Re:So wont 3 Kinects make 3D video? (1)

arshadk (1928690) | more than 3 years ago | (#34387024)

You can't have 6 minute abs! 3 or 4 could make for a pretty cool image all the way round. I wonder if this could be paired with CAD software and a 3D printer.

Re:So wont 3 Kinects make 3D video? (1)

TuringTest (533084) | more than 3 years ago | (#34387080)

I'd expect that 3 Kinects would make 4D video.

Re:So wont 3 Kinects make 3D video? (1)

noidentity (188756) | more than 3 years ago | (#34387166)

I'd expect that 3 Kinects would make 4D video.

What about four Kinects? Oh man, 5D... they'd like create a new dimension!!!

Re:So wont 3 Kinects make 3D video? (1)

Mister Whirly (964219) | more than 3 years ago | (#34390176)

No, the 5th Dimension already exists. [wikipedia.org]

This is the dawning of the Age of Aquarius!

Re:So wont 3 Kinects make 3D video? (1)

clyde_cadiddlehopper (1052112) | more than 3 years ago | (#34390296)

Yep, except for that darned "light only travels in one direction in time" problem... sigh.

Re:So wont 3 Kinects make 3D video? (1)

MorpheousMarty (1094907) | more than 3 years ago | (#34390496)

So wont 3 Kinects make 3D video?

I get what you're saying: with 3 of these you should be able to get x,y,z coordinates. However, each of these already gets x,y,z for the surfaces facing the camera; the problem is you need to hit all the surfaces. With 6 Kinects covering front, back, left, right, top and bottom you'd probably have the best coverage, but I expect four of them, one in each corner of the room like security cameras, would give similar results.
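Conceptually, fusing several corner-mounted Kinects is just transforming each unit's points into one shared room frame and concatenating. This sketch assumes each camera's pose (rotation and translation) is known from a one-off extrinsic calibration; the helper name and toy data are mine:

```python
import numpy as np

def merge_point_clouds(clouds, poses):
    """Transform each Kinect's points into a shared room frame and concatenate.

    clouds: list of (N_i, 3) arrays of points in each camera's local frame.
    poses:  list of (R, t) pairs per camera (3x3 rotation, 3-vector translation),
            assumed known from calibration.
    """
    world = [pts @ R.T + t for pts, (R, t) in zip(clouds, poses)]
    return np.vstack(world)

# Two toy one-point clouds from cameras at opposite sides of a room.
identity = np.eye(3)
clouds = [np.array([[1.0, 0.0, 2.0]]), np.array([[-1.0, 0.0, 2.0]])]
poses = [(identity, np.zeros(3)), (identity, np.array([4.0, 0.0, 0.0]))]
print(merge_point_clouds(clouds, poses))  # [[1. 0. 2.] [3. 0. 2.]]
```

The hard part in practice is getting those (R, t) pairs and dealing with overlapping, slightly inconsistent surfaces, which is exactly what the interference hacks in this thread are trying to make easier.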

This is immense... (1)

L4t3r4lu5 (1216702) | more than 3 years ago | (#34386888)

This makes for real 3D movies. Capture the streams from both sources, combine in real time in the viewer, and you're able to change your PoV and focus independently of any other observer.

This is revolutionary for entertainment. Not stereoscopy.

Re:This is immense... (0, Flamebait)

nicholas22 (1945330) | more than 3 years ago | (#34387054)

I think this is similar to what James Cameron did for the (bad) movie called Avatar.

Re:This is immense... (2, Interesting)

L4t3r4lu5 (1216702) | more than 3 years ago | (#34387098)

It's a mixture of the two. He used two cameras to film the live action scenes, but the output was reduced to stereoscopic 3D on the screen.

This is actual 3D on the screen, like a 3D game. You can't zoom in, or even focus, on the background in Avatar. In fact, attempting it gave me a massive headache. With this true 3D rendering of an object, you can zoom, focus, and more importantly pan around objects in the scene, in real time. That is the breakthrough this hack has brought about.

Re:This is immense... (2, Informative)

slim (1652) | more than 3 years ago | (#34387140)

With this true 3D rendering of an object, you can zoom, focus, and more importantly pan around objects in the scene, in real time.

Er, if neither of the Kinect cameras is focused on the background, then it's going to be blurry no matter what.

Assuming we're talking about a recording, you'd be able to move the virtual camera, but you wouldn't be able to bring things into focus that were not in focus in the recording.

What this gives you is a 3D model, with as many textures mapped onto it as there are cameras.

Re:This is immense... (1)

L4t3r4lu5 (1216702) | more than 3 years ago | (#34387156)

True, thanks for the clarification. Ok, you won't get to focus, but you will get to pan. That's awesome with a capital sweet.

Re:This is immense... (1)

tibit (1762298) | more than 3 years ago | (#34389622)

You're missing how much more expensive non-CG movies would have to be to allow this. On many sets, if you moved the camera just a bit outside of its intended view, you'd see all of the production equipment, other people, etc. In classical 2D and stereoscopic (market-speak "3D") filming on a set, you only build enough of an expensive set to film what's in the screenplay. Anything more is a waste.

Same goes for 3D CG movies: there's no point in making the character and scene models any more detailed or extensive than they need to be. It all requires work, either for modelmaking or to develop software that procedurally generates the model (usually for environments).

Re:This is immense... (1)

drinkypoo (153816) | more than 3 years ago | (#34391298)

I thought that the Kinect camera was fixed-focus with infinite DoF? In that case, you can emulate focus by blurring everything not in focus in the final scene. Of course, everything will be equally out of focus...

Re:This is immense... (1)

grumbel (592662) | more than 3 years ago | (#34391726)

Er, if neither of the Kinect cameras is focused on the background, then it's going to be blurry no matter what.

Unless you want to do extreme close-ups, focus isn't much of an issue, as the depth of field of any webcam (and of things like the Kinect) is rather large. The blurring of the background you see in cinema doesn't happen by accident but by design; if you just pointed a regular webcam at the scene you wouldn't get it. The bigger problem is resolution, as anything further away naturally has an ever lower resolution than stuff in the foreground. So you couldn't freely float around a room without things getting mushy, but panning around an object should be perfectly fine.

Re:This is immense... (1)

slim (1652) | more than 3 years ago | (#34387108)

Only for a very broad kind of "similar".

The live action parts of Avatar would have been filmed using traditional stereoscopic techniques; two cameras imitating two eyes.

The CG elements would have been traditional CG; models created by a combination of artists and 3D scans.

The CG animation would have been motion capture as used by Peter Jackson and countless video games: multiple cameras tracking reflective points attached to an actor's body.

Re:This is immense... (0)

Anonymous Coward | more than 3 years ago | (#34387160)

This makes for real 3D movies. Capture the streams from both sources, combine in real time in the viewer, and you're able to change your PoV and focus independently of any other observer.

What? There are no focus controls on this. Knowing the depth won't magically allow you to recover an unblurred image; it's not a light field capture device. You won't be able to virtually refocus, so it still falls short of "real 3D".

That's nothing... (2, Funny)

Anonymous Coward | more than 3 years ago | (#34386904)

Can you imagine a beowulf cluster of kinects??

Re:That's nothing... (0)

Anonymous Coward | more than 3 years ago | (#34387000)

http://www.breezesys.com/MultiCamera/index.htm [breezesys.com] all around your "bullet time" 3d Kinetic subject?

Re:That's nothing... (1)

JustOK (667959) | more than 3 years ago | (#34387082)

or Natalie Portman with hot kinects?

Fun, but expected (0, Redundant)

paxcoder (1222556) | more than 3 years ago | (#34386906)

This [slashdot.org], however, is awesome.
It's not for realtime moving objects, but it does its job perfectly.

Re:Fun, but expected (0)

Anonymous Coward | more than 3 years ago | (#34386954)

You just linked to the same /. article that you're posting on...

Re:Fun, but expected (2, Funny)

samjam (256347) | more than 3 years ago | (#34388052)

He's not the only one. My depth-first recursive post counter has found hundreds of such posts.

Home Survivelance (1)

NBolander (1833804) | more than 3 years ago | (#34386974)

Am I the only one imagining getting a Kinect or two in every room of their home and then using them to fly through a 3D video feed of their apartment?

Re:Home Survivelance (1)

mrsurb (1484303) | more than 3 years ago | (#34387414)

Plus potential advertisers [gearlog.com] will be able to try to sell you what they know you don't own!

Re:Home Surveillance (1)

NBolander (1833804) | more than 3 years ago | (#34388136)

No need to export the video feed to them; I just want the 3D video stream for myself and perhaps a few selected friends.
It would be cool to insert avatars from several people there, though. A 3D video version of the old MOO concept, or a local Second Life with a live video background, to use a more modern analogy.
Bandwidth will be an issue, however.

X-Ray machines (1)

MrDoh! (71235) | more than 3 years ago | (#34387010)

With all the recent news about backscatter machines and the need for improved X-ray machines, this sort of system would be fantastic for improving the quality of screening: being able to look into luggage and see depth.

2 Kinects, 1 Box... (4, Funny)

bhunachchicken (834243) | more than 3 years ago | (#34387066)

... is good, but I'm holding out for 4 Girls, 3 Kinects, 2 Boxes, 1 Cup :)

Re:2 Kinects, 1 Box... (2, Funny)

Anonymous Coward | more than 3 years ago | (#34387562)

...and a partridge in a pear tree?

Re:2 Kinects, 1 Box... (0)

Anonymous Coward | more than 3 years ago | (#34387750)

2 girls 1 cup, idiot.

Re:2 Kinects, 1 Box... (2, Funny)

acoustix (123925) | more than 3 years ago | (#34387706)

... is good, but I'm holding out for 4 Girls, 3 Kinects, 2 Boxes, 1 Cup :)

Correct me if I'm wrong, but if there's 4 girls then wouldn't there also be 4 boxes?

Just sayin'...

Re:2 Kinects, 1 Box... (0)

Anonymous Coward | more than 3 years ago | (#34389418)

Depends on how many of the girls are "girls"

Re:2 Kinects, 1 Box... (1)

g1zmo (315166) | more than 3 years ago | (#34393912)

And a jar.

Polarizing filters! (3, Insightful)

John Pfeiffer (454131) | more than 3 years ago | (#34387174)

When I first saw the video of one Kinect, I immediately wondered how you could get multiple units working together.

It wasn't until I watched the video again later that day that it hit me. I had just explained to someone how 3D theater projection works, and the epiphany followed: the most sensible course is to use polarizing filters.

With filters on the IR emitters and cameras, each unit should only see its own IR illumination. Of course, it would only work at maximum effectiveness for two Kinects, but considering how well this turned out with the units at right angles to each other, I don't see why you couldn't combine the two ideas for 3-4 units and get sufficient quality.

I wish I had the money to get a couple Kinects and test my idea, but I'm no good with coding anyway.

It'd be awesome to see the Blender Foundation put out a bounty for a Kinect-based open source motion capture and 3D scanning suite though. :D

Re:Polarizing filters! (4, Informative)

Anonymous Coward | more than 3 years ago | (#34387340)

Unfortunately, this wouldn't work very well. Light tends to lose its polarization somewhat when it bounces off of things. In a theater that's OK because you can use a special screen that maintains the polarization. Band-limiting each Kinect would be more effective than polarization (and would also scale better: polarization only allows for two Kinects, while the bandpass idea is limited only by how good your filters are).

Re:Polarizing filters! (0)

Anonymous Coward | more than 3 years ago | (#34394062)

But you could use left- and right-hand circular polarization; not sure how you would do that on the cheap, though.

Re:Polarizing filters! (0)

Anonymous Coward | more than 3 years ago | (#34387484)

Does scattered light still preserve polarization? If not, then it won't work.

Re:Polarizing filters! (1)

MikeBabcock (65886) | more than 3 years ago | (#34388182)

Light is re-polarized when bouncing off of things; that's why people wear polarized sunglasses to eliminate glare.

Unfortunately, you wouldn't be able to predict the resulting polarization with great confidence off the curved surfaces and strange angles of a human body.

Re:Polarizing filters! (1)

tibit (1762298) | more than 3 years ago | (#34389846)

I'd do it in a different way that may well be lower cost and more scalable than any wavelength- or polarization-based selectivity.

1. Run the Kinects off a common reference frequency. The onboard circuitry probably uses one crystal oscillator and PLL-controlled VCOs to generate various derivative frequencies to time everything. A common reference will keep all Kinects phase-synchronized, while the phase itself may well be random.

2. Figure out the phase angle (vs. the reference frequency) at which the IR camera shutter is open, and figure out how this angle is initialized. Is it fixed at power-up, or is it related to the timing of the USB commands? All we care about is knowing when the IR camera is sensitive to light, and how to control the initial reference phase for that. It may even be that the IR light is strobed for thermal or power management -- that would need to be in sync with the camera. Anyone out there with a Kinect and a scope to probe the drive voltage to the IR illuminator? Or perhaps the camera is a separate chip and there are simply clock lines to be sniffed out. In any case this should be fairly easy to do.

3. Set up a simple mechanical sector shutter for each Kinect's camera. A symmetrically notched CD glued to a spindle assembly from a dead CD or DVD drive will do just fine. The illuminator can be chopped electrically.

4. Add an optical interrupter sensor to each shutter for feedback, and run a PLL on a microcontroller to keep the mechanical shutter in phase sync to the IR camera's electronic shutter.

Since we can control the camera shutter phase and keep those phases synced across multiple Kinects, we can do time-division multiplexing. With very many Kinects one needs to increase the power to the illuminator, perhaps moving to a higher-powered IR LED or laser diode as a light source. The camera shouldn't be a problem -- heck, it will become less sensitive to background IR as the time slices get shorter.
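The scheduling math behind this time-division scheme is simple enough to sketch. This is purely illustrative (the function names and the 30 Hz frame rate are my own assumptions, not anything measured from Kinect hardware):

```python
# Illustrative sketch of time-division multiplexing for N Kinects:
# one shared frame period is split into N equal, non-overlapping
# exposure slots. The real phases would come from the shared
# reference clock and per-unit shutter PLLs described above.

def slot_window(unit, n_units, frame_period):
    """(start, end) time within one frame period during which
    `unit` (0-based) may open its shutter and strobe its IR."""
    slot = frame_period / n_units
    start = unit * slot
    return start, start + slot

def is_active(unit, n_units, frame_period, t):
    """True if `unit` should be exposing/illuminating at time t."""
    start, end = slot_window(unit, n_units, frame_period)
    return start <= (t % frame_period) < end

# With 4 units at 30 Hz, each gets ~8.3 ms of exclusive exposure,
# and at any instant exactly one unit sees only its own dot pattern.
period = 1.0 / 30.0
assert sum(is_active(u, 4, period, 0.005) for u in range(4)) == 1
```

Note the trade-off this makes explicit: each added unit shrinks every unit's exposure slot, which is exactly why the illuminator power has to go up as the cluster grows.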

Alternating refresh rates? (0)

Anonymous Coward | more than 3 years ago | (#34391044)

Just like those clunky 3D glasses and your old CRT. Have one Kinect's IR grid and camera set at a refresh rate that's offset from the other's so that they can't see each other's grids. Of course, I'm assuming the IR projector even HAS a refresh rate. Just thinking.

Re:Alternating refresh rates? (0)

Anonymous Coward | more than 3 years ago | (#34391446)

His follow-up video http://www.youtube.com/watch?v=YB90t5Bssf8&feature=related addresses the different options: polarizing the laser, syncing shutters, etc.

Yo dawg... (0)

Anonymous Coward | more than 3 years ago | (#34387324)

I herd you like Kinects so I put a Kinect on each Kinect so you could move your arms while you were moving your arms.

3D Scanner (1)

necro81 (917438) | more than 3 years ago | (#34387356)

The results look almost identical to the kind of data I get from the NextEngine [nextengine.com] 3D laser scanner. To create a 3D surface, the device sweeps a laser across the object in front of it. The laser sweeps a vertical line, and shines on the (arbitrary) surface of the object in front of it. Stereo cameras capture the shape of the laser line from different angles, and software is able to extract the 3D surface from there. An accompanying visible-light image from one camera or the other is used to apply a "skin" to what is otherwise a wireframe. By using a laser and taking its time, rather than broadcasting an infrared grid of fiducial dots, the results are very good: sub-millimeter accuracy is easy, though for handheld objects, not people in a room. Similar technology can be used for very large scale models, such as the I-35W bridge [profsurv.com] collapse in Minneapolis [usgs.gov] .
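For the curious, the triangulation underlying these stereo-camera scanners reduces to the textbook pinhole relation depth = focal length x baseline / disparity. The sketch below is that idealized, rectified-camera model with made-up numbers, not the NextEngine's actual (proprietary) pipeline:

```python
# Textbook stereo triangulation for rectified cameras: a point that
# appears shifted by `disparity_px` pixels between the two views lies
# at depth f * b / d. Real scanners refine this with calibration and
# sub-pixel laser-line detection; this is only the core relation.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in meters of a point seen by two rectified cameras with
    focal length `focal_px` (pixels), separated by `baseline_m`
    meters, at a disparity of `disparity_px` pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example (made-up numbers): 640 px focal length, 12.5 cm baseline,
# 16 px disparity -> the point is 5 m away.
print(depth_from_disparity(640, 0.125, 16))  # 5.0
```

Since depth falls off as 1/disparity, a short-baseline device loses accuracy quickly at range, which is why these scanners favor close-up, handheld objects.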

Oblig. (1)

lloydsmart (962848) | more than 3 years ago | (#34387536)

Wow, imagine a Beowulf.... blah blah.

In other words... (1)

asCii88 (1017788) | more than 3 years ago | (#34387596)

1. Find YouTube channel with worthy content
2. Subscribe
3. Share new videos on Slashdot
4. ????
5. PROFIT!

Re:In other words... (0)

Anonymous Coward | more than 3 years ago | (#34387934)

Yeah, but it's step #1 that is a bitch.

Basic Webcam (1, Informative)

jgtg32a (1173373) | more than 3 years ago | (#34387920)

Y'know, to the best of my knowledge you cannot use the Kinect as a webcam in Skype. I would love to buy a Kinect, but I need a reason other than awesome tricks; I need useful functionality.

Be great for homemade pr0n (1)

bemenaker (852000) | more than 3 years ago | (#34388400)

Just wait for the flood of homemade 3d pr0n :) (hey somebody had to say it)

Re:Be great for homemade pr0n (1)

vgerclover (1186893) | more than 3 years ago | (#34389274)

Actually, it seems likely to me that geeky (as in geeks will be the ones doing it) amateur 3D porn will become commonplace before commercial 3D porn.

this+HUD (1)

mugnyte (203225) | more than 3 years ago | (#34391962)

  Given a high enough image quality, bandwidth, and some motion-sensing gear (ahem), any immersion-style display (HUD, dome, etc.) could allow for real-time panning of a distant location.

  Examples:
    - Shooting a net of these at an operating table would let remote viewers move around the room and view the procedure without crowding the room or being limited to the perspective of a single camera.
    - A web site could point this setup at anything interesting (lab experiment, box of puppies, anthill, construction site, political debate) and stream it live for an amazing viewer-decided perspective.
    - Live news could mount an array of this setup on a vehicle and capture a modeled view of anything they could reach, then pan around without much camera work.
