
Video with Depth

michael posted more than 12 years ago | from the 3d-glasses dept.

Graphics

Lifewolf writes: "A new technology from 3DV Systems uses pulsed infrared illumination to capture depth information for every pixel of a video stream. This allows for neat tricks like realtime keying without the need for color backgrounds. JVC is already selling a product based on this, the ZCAM."
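Depth keying, in practice, reduces to thresholding the depth channel and masking the color frame. A minimal NumPy sketch of the idea; the millimeter units and array shapes are assumptions, since the story doesn't document the ZCAM's actual output format:

    import numpy as np

    def depth_key(rgb, depth, near_mm, far_mm):
        # Keep only pixels whose depth falls inside [near_mm, far_mm].
        # rgb: (H, W, 3) uint8 frame; depth: (H, W) distance in millimeters.
        mask = (depth >= near_mm) & (depth <= far_mm)
        foreground = np.where(mask[..., None], rgb, 0)  # black outside the band
        return foreground, mask

    # Toy frame: only the top-left pixel sits in the 0.5-1.5 m keying band.
    rgb = np.full((2, 2, 3), 200, dtype=np.uint8)
    depth = np.array([[800, 3000], [2500, 4000]], dtype=np.uint16)
    foreground, mask = depth_key(rgb, depth, near_mm=500, far_mm=1500)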


110 comments


Some pot? (-1)

October_30th (531777) | more than 12 years ago | (#2978759)

Goddamn I'm bored.

I really wish I had some pot to smoke.

Re:Some pot? (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2978765)

here, try some of mine... it's called VirtualMindFuck...

Re:Some pot? (-1)

Fucky the troll (528068) | more than 12 years ago | (#2978768)

I'm also bored. Time to cook breakfast I think.

now where did I put those kittens?

Re:Some pot? (-1)

October_30th (531777) | more than 12 years ago | (#2978856)

Got an automated SMS from work. One of the arsine detectors had been triggered in the lab and I had to drive all the way there and wear a damn breathing apparatus, just to find out it was a false alarm again.

Good thing I hadn't smoked pot and that it wasn't a real emergency. It might have been embarrassing to explain to the arriving emergency services that "Yeah, I'm in charge of the lab" with my eyes red and clothes reeking of pot.

my first first post (-1, Troll)

Anonymous Coward | more than 12 years ago | (#2978760)

first post!

well, it would have been, but i've been caught by the 20 second timer about 4 times.. the last time at 19 seconds.

Modeling applications? (2)

Wavicle (181176) | more than 12 years ago | (#2978772)

This opens up some great possibilities for digitizing 3D models. Anybody heard of this technology already being used for that?

p0rn? (0)

Anonymous Coward | more than 12 years ago | (#2978776)

it's a bit disturbing to think that someone could be filming you with a camera.. but secretly making a 3D model of you! =)

sp! (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2978797)

yo! everyone knows it's "pr0n" and not "p0rn"...

c'mon, toss that thing into gear!

Re:Modeling applications? (2, Interesting)

Comrade Pikachu (467844) | more than 12 years ago | (#2978921)

There are already some optically based 3D scanners on the market. The first ones used a scanning laser beam to trace out a line that described an object's surface contour. More recent versions use a purely optical method (I think).

This system could probably be used for modeling by placing a physical model on a turntable and recording its changing z-depth over time. I wonder how accurate it is at close range. This could be really useful for architects who want to develop a 3D site plan. Simply snap a few shots at the building site, construct a DXF file based on the depth information, and import it into your CAD software.

The camera is probably intended for use with compositing applications like Shake, which can process z-depth information as well as RGB and alpha. Great for seamlessly integrating live action with computer-generated 3D, particularly realtime 3D.

This also poses the question: what other types of useful information can a digital camera acquire, if we are not limited to the visual spectrum? Would it be possible to extract diffuse color, reflected color, transparency, or other "ray depth" information from real life subjects?
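The turntable idea amounts to back-projecting each depth pixel through a pinhole camera model and rotating the points by the turntable angle before merging shots. A rough sketch; the focal length f and principal point (cx, cy) are made-up parameters, since the article gives no calibration details:

    import numpy as np

    def depth_to_points(depth, f, cx, cy):
        # Back-project a depth map (meters) into camera-space XYZ points.
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / f
        y = (v - cy) * depth / f
        return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

    def turntable_align(points, angle_rad):
        # Undo the turntable rotation (about the vertical axis) for one shot.
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
        return points @ rot.T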

Re:Modeling applications? (0)

Anonymous Coward | more than 12 years ago | (#2979402)

Some laser-based method was used to create a 3D-model with textures of some cave. It was about saving some cave paintings. I really can't remember the whole story.

Re:Modeling applications? (1)

Comrade Pikachu (467844) | more than 12 years ago | (#2980377)

A few years ago there was an exhibit at Epcot (Disney World) showing a 3D computer model of the interior of the Lascaux caves in France. Perhaps that is what you are thinking about. Pretty sophisticated 3D graphics for its time.

Re:Modeling applications? (1)

Queer Boy (451309) | more than 12 years ago | (#2981008)

The camera is probably intended for use with compositing applications like Shake, which can process z-depth information as well as RGB and alpha. Great for seamlessly integrating live action with computer-generated 3D, particularly realtime 3D.

This is kind of offtopic, but interesting nonetheless. Apple recently bought Nothing Real [nothingreal.com], the company that makes Shake and Tremor.

Can you say Final Cut Pro 4?

What's so difficult? (2, Interesting)

evilviper (135110) | more than 12 years ago | (#2978774)

I've never really seen what makes 3D video (or 4D, to be particular) so difficult to record.

Humans have two eyes in the front of their heads, inches apart. All that is needed in a camera is for two synchronized tapes to run simultaneously, with the lenses just a few inches apart.

Play back the left half to the left eye and the right half to the right eye, and our own built-in systems have no problem fusing those two images into a single 3D image.

I think the difficulty is not in the recording of 3D information, but in building a display to play it back to multiple people.

Re:What's so difficult? (2, Informative)

molekyl (152112) | more than 12 years ago | (#2978786)

You're mixing things up.

This is not an attempt at 3D-video. This is video with depth information.

Its primary application is to select parts of the image that you want to replace ('keying'), nothing else.

Re:What's so difficult? (2)

evilviper (135110) | more than 12 years ago | (#2978844)

I admit it's not solidly on-topic, but I am not confused.

Re:What's so difficult? (0)

Anonymous Coward | more than 12 years ago | (#2980244)

> This is not an attempt at 3D-video. This is video with depth information.

right.

The important feature here is that a discrete z-value is recorded for each pixel. This is useless for a human interface, where depth information must be presented stereoscopically.

This -is- significant, however, for a machine interface. Depth-based keying of one video image over a background image is only one immediate application.

Now think of machine vision applications. This eliminates the need for complex processing of stereoscopic camera images just to synthesize the depth information. Think depth-based object discrimination. Think little robots that don't bump into things. Think little robots that follow things.

I wonder what the practical limits are - precision, depth of field, etc.

I am not an engineer.
Send $10, or kill me.
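That last idea is about as simple as machine vision gets once depth comes free with every pixel: threshold the near field and steer away from the more crowded side. A toy sketch; the stop distance and the 5% trigger are invented numbers:

    import numpy as np

    def avoid_obstacles(depth, stop_mm=600):
        # Steer a toy robot from one depth frame; depth is (H, W) millimeters.
        near = depth < stop_mm                   # pixels dangerously close
        h, w = depth.shape
        left = near[:, : w // 2].mean()          # fraction of near pixels, left half
        right = near[:, w // 2 :].mean()         # ... and right half
        if max(left, right) < 0.05:              # nothing close: keep going
            return "forward"
        return "turn_right" if left > right else "turn_left"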

Re:What's so difficult? (-1)

Fucky the troll (528068) | more than 12 years ago | (#2978790)

I have seen such an invention. The two images are projected onto a large screen (usually curved to 180 degrees) as different colours. The spectators wear special devices over their eyes that magically change the colours that each eye can see. These two factors combine to make a great 3d moving picture, or "movie".

You should be able to see such wonders at your nearest science fair or theme park.

Twofold problem (1)

aepervius (535155) | more than 12 years ago | (#2978791)

Part of it may be that what you record stereoscopically you also have to play back with a stereoscopic system (sorry for the spelling, I am not English). Many systems are/were tried on various media (blue/red glasses on TV, one frame out of two on computers, etc.). But all of those systems have pros and cons (cost, quality, ease of use, etc.). So in effect the problem looks easy but isn't (like the problem of path minimization, or even the "knot" problem). Furthermore, I am not a biologist, but as far as I have discussed it with one, there is another problem: for us humans, the eyes aren't working alone. The brain superposes a correction on what we see. Objects it recognizes are not seen as "flat" even when seen with only one eye. It automatically adds depth. Or something like that. Feel free to correct me, as I am speaking outside my domain of expertise (quantum physics :))

Re:Twofold problem (3, Interesting)

evilviper (135110) | more than 12 years ago | (#2978862)

Like I said, there is no problem recording the image in 3D. The problem seems to be playing it back to an audience easily.

The brain superposes a correction on what we see. Objects it recognizes are not seen as "flat" even when seen with only one eye. It automatically adds depth.


True, but what most people don't realize is that we see just as much depth in a TV screen as we would in real life if we covered one eye.

Speaking of complex problems... There are certain devices that, when placed over your eyes, will essentially trick your eyes into seeing depth on a flat screen, so there is quite a lot of depth information saved in a 2D image. The strange thing is that computer generated images are still seen as flat, while the rest has depth. What is different between the two is a mystery, but it just goes to show that our minds are privy to much more information than we are consciously aware of. (Have you ever seen a movie which used special effects and it just didn't seem right, even though you couldn't point out any real problem?)

Re:Twofold problem (2)

Elwood P Dowd (16933) | more than 12 years ago | (#2979093)

True, but what most people don't realize is that we see just as much depth in a TV screen as we would in real life if we covered one eye.


Remember, a strong cue for 3D perception does not require two eyes: moving your head just slightly gives you stereo vision over time. Sometimes you can't get the same thing from a steadicam shot.

Re:Twofold problem (2)

mskfisher (22425) | more than 12 years ago | (#2979343)

Yep. I'm monocular (due to surgery to correct crossed eyes), though I retain use of both of my eyes. (Actually, I can even control which is my dominant/active eye, which allows me to perform rudimentary stereo checks, if only to amuse myself.)
I do gain a lot of information from motion.
At the same time, starfield simulations and the like (if done properly: refresh rate, etc.) can really draw me in.

Re:Twofold problem (0)

Anonymous Coward | more than 12 years ago | (#2979556)

It's called depth of field and is caused by the limited focal range of a camera.
Depth of field is easily added to computer generated images during rendering, but is rarely done, as it's a pain to act as a virtual focus puller at the same time as being a virtual camera man.

Re:Twofold problem (2)

Supa Mentat (415750) | more than 12 years ago | (#2979856)

We really don't completely understand what you're talking about. It is true that with most people, if you cover one eye you lose depth perception. But it doesn't have to be that way. A friend of mine is legally blind in one eye; he shouldn't have any depth perception, and most people with his particular condition don't. Many years back he switched to a new optometrist. When he went in for preliminary testing with this guy he got his depth perception tested; they had never done this at his first optometrist's office, since they assumed he didn't have any. The boy has perfect depth perception; he's one of the best tennis players in the state. No one knows why, and no one can offer any explanation other than that his ONE working eye can do depth perception _by itself_. So it's a bit more complicated than anyone really knows. As a neurologist, all I can tell you is that it's just another one of the many mysteries the brain presents us.

Re:Twofold problem (0)

Anonymous Coward | more than 12 years ago | (#2981078)

You can simulate depth perception with a knowledge of the actual size of objects.

Take a tennis ball and put it 10 meters from your friend. Say he tells us it's exactly 10 meters away.

Now take a perfect replica of a tennis ball at twice the scale, and put it 20 meters away.

Your friend will likely tell us that it's 10 meters away (assuming no other visual cues).
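The trick is just similar triangles: apparent angular size is roughly diameter divided by distance, so doubling both leaves the angle unchanged. A quick check, using the standard 6.7 cm tennis-ball diameter:

    # Small-angle approximation: apparent size ~ diameter / distance (radians).
    normal_ball_at_10m = 0.067 / 10.0          # ~6.7 milliradians
    double_ball_at_20m = (2 * 0.067) / 20.0    # same ~6.7 milliradians
    assert abs(normal_ball_at_10m - double_ball_at_20m) < 1e-12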

Re:Twofold problem (2, Interesting)

arakis (315989) | more than 12 years ago | (#2979967)

May I correct some common misconceptions about 3-dimensional optics vs. stereoscopic. 3-dimensional light is based on a wave of photons traveling through a volume of space. Outside of holography, this wavefront of light is only achievable in the real world. Stereoscopic images consist of separate left and right images that, when combined, give the *illusion* of depth due to various parts of your brain that gauge distance, but not depth, since they are based on a 2-dimensional sampling.

It may seem that I am splitting hairs here, but I get very frustrated when people think that having one eye covered eliminates all depth perception. That is a categorically wrong assertion, since the retina in each eye occupies a three-dimensional space. People who have lost an eye encounter problems with depth perception, but do not lack the *ability* to perceive depth.

If you pay close attention to any stereoscopic image, whether it is a "magic eye" or a Viewmaster, you will notice that things are collected into two-dimensional sheets that appear to have depth relative to each other. A similar situation in real life would be if everything was either a backdrop or a cardboard cutout.

By contrast, the image displayed in a hologram presents an integral depth of the surface that is perceptible by a single human eye. It looks *real* because it is exactly the same 3-dimensional wavefront that existed when light was bouncing off the object to record the hologram.

It is all a little confusing, but a little thought and casual observation will reveal these things to you. In my case I spent three months interning in a holography studio in NYC, so I got to hear many interesting discussions on this and various other strange concepts of reality.

So please, people: parallax does not mean the same thing as depth. If anything, please take that away from this thread.

Re:What's so difficult? (1)

FrenZon (65408) | more than 12 years ago | (#2978817)

They are not concerned with capturing depth data for broadcasting in 3D - they are building a system to automatically differentiate the subjects from the background, to allow bluescreening and video effects to be applied in real time.

A double-camera system with a computer vision system would have difficulty picking out the edges of subjects, and the resulting 'bluescreening' would be bodgey, at best. This is a relatively cheap and simple solution.

Re:What's so difficult? (1)

Ex-MislTech (557759) | more than 12 years ago | (#2978846)

I think they are going to use holography ultimately; it's just slow in coming to the mainstream.

Re:What's so difficult? (2)

evilviper (135110) | more than 12 years ago | (#2978874)

Could you possibly be more vague?

There are many ways to generate holographic images. The question is in the details. Will we see the same thing from any angle? Will a series of mirrors be used or just several lasers? How big will the picture really be?

It's just as possible that in the future we'll all just strap on something similar to the I-Glasses [thinkgeek.com] and individualize the experience.

Re:What's so difficult? (2)

Forkenhoppen (16574) | more than 12 years ago | (#2979301)

I have an idea, but I'm not sure if it applies to this article or not.

Why are there no holographic cameras? How about a personal photographic system that could take a 2D picture along with depth information? Couldn't the lab then use that information to extract some semi-3D models as a basis for a hologram? (You know; one of those thin colour-banded holograms they put on CDs and credit cards..?) Or is the cost of making those holograms prohibitively high..?

Re:What's so difficult? (1)

ragnarok (6947) | more than 12 years ago | (#2979511)

Why are there no holographic cameras?

Using normal holographic film, you need monochromatic light to expose it (i.e. a laser), and exposure time is measured in seconds. Not very practical for a "camera".

Re:What's so difficult? (4, Informative)

Pemdas (33265) | more than 12 years ago | (#2979228)

The concepts behind it aren't too difficult; a google search for epipolar geometry [google.com] is a good place to start.

The biggest problems are computational; it's hard to do a good job of stereo reconstruction at high frame rates in real time. It's by no means impossible, and there are commercial products out there that do it, like this one [ptgrey.com].

Two cameras aren't really necessary, either, if your camera is moving in the scene. It's possible to recover both the movement of a camera and 3-d information about a scene just by moving a camera through it. Googling for structure from motion [google.com] is a good place to start looking into those techniques, and there's a pretty cool page about one group's application here [caltech.edu].

In short, this company may have an interesting product (depending on cost and more details on the error characteristics), but this isn't something that couldn't be done with existing methods.

Also, as an aside, I find it interesting that they take a swipe at laser rangefinders as requiring a spinning mirror, when just about all IR cameras have a spinning "chopper" as an integral part of the exposure system...:)
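For anyone who wants to try the classical two-camera route today, OpenCV ships a basic block-matching stereo matcher. A minimal sketch; the file names are placeholders, and an already-rectified pair (horizontal epipolar lines) is assumed:

    import cv2

    # Load a rectified stereo pair as grayscale.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Block matcher; numDisparities must be a multiple of 16.
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = stereo.compute(left, right)  # fixed-point: 16x actual disparity

    # Depth is inversely proportional to disparity: z = focal_length * baseline / d.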

Re:What's so difficult? (2)

Com2Kid (142006) | more than 12 years ago | (#2979795)

Nah man, that just gives you minimal depth perspective; the data format is still 2D.

It is the difference between taking a front-on picture of a 3D object, importing that picture into Adobe Illustrator (or KIllustrator, take your pick - now KDraw or KVector, isn't it?) and using a "convert to paths" tool - which will get you a very nice 3D -looking- image that only stores two dimensions - versus taking multiple shots of that object and importing them into an application that calculates the 3D space of that object.

Of course the advantage of what THIS camera does is that you get some 3D information without having to do a lot of REALLY nasty interpolation between multiple images. Granted, modern techniques for doing that have gotten better, but artificially creating 3D data from 2D pictures of 3D objects, well. . . heh. Even worse if those objects are "4D" (aka moving).

This new camera seems to deal with moving objects just fine. Yay.

The MAIN thing that I am thinking of is that you could possibly translate objects around in the 3D space that was created by this camera.

Your point of view would remain fixed and none of the objects could rotate (more on this later), but you could still do some REALLY nice stuff in regards to Object Based Encoding.

In fact, the integration of 3D data into Object Based Video Encoding technologies could make for some VERY nice bit rates, or at least the removal of gobs of artifacts.

Imagine if the Video Encoding KNEW that such and such person was going BEHIND that plant.

Now of course one other use for this is if you combined it with the pre-existing methods of using multiple cameras to capture a 3D space. With this method you could - maybe even after just creating an object outline in one viewpoint (I will have to think over this particular facet of this new technology more in order to prove or disprove that idea) - rotate all the separate OBJECTS within the scene, and not just move your view around the scene. (This is of course excluding any partially obscured objects, which would likely have some strange things happen to them. :) )

Because you have each object's X, Y, and Z coordinates, and your camera could have almost complete X, Y, and Z plane movements (remember, interpolated between multiple sources, and your image quality when zooming in would be dependent upon your original capture quality), you have yourself what is basically a fully workable 3D workspace.

Imagine importing your video some day not into Adobe Premiere but rather into Maya or 3D Studio Max.

Kick Ass.

Re:What's so difficult? (1)

3Suns (250606) | more than 12 years ago | (#2980447)

IIRC, the depth information given by that technique is not very robust. It might be able to place one object behind or in front of another, but for any kind of precision you'd probably need a shutter speed that would make both images irresolvably blurry.

Our eyes gather most of their depth information by focusing on a particular object, then doing some really cool neural-network trigonometry to measure the inward angle each eye is at to approximate the distance to the object. This could be done with cameras on extremely sensitive servos, but the information is only obtained for a single object, not everything else in the scene.
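That "neural-network trigonometry" is plain triangulation: with the eyes a baseline b apart, each rotated inward by angle theta to fixate a point on the midline, the distance works out to b / (2 tan theta). A sketch, with a typical 6.5 cm interocular baseline assumed:

    import math

    def fixation_distance(baseline_m=0.065, inward_angle_rad=0.01):
        # Symmetric vergence: each eye turns inward by the same angle to
        # fixate a midline point; tan(angle) = (baseline / 2) / distance.
        return baseline_m / (2 * math.tan(inward_angle_rad))

    print(fixation_distance())  # ~3.25 m for 0.01 rad of vergence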

Video with depth? (-1, Flamebait)

Anonymous Coward | more than 12 years ago | (#2978788)

Open source programmers are still poor lusers anyway :)

Re:Video with depth? (-1, Offtopic)

Ex-MislTech (557759) | more than 12 years ago | (#2978848)

Go Fish...Oh...You already are ...LOL

Fun to abuse... (4, Insightful)

fleeb_fantastique (208912) | more than 12 years ago | (#2978789)

Can you imagine using this technology to insert your favorite politician in a porn video? George Bush Does Dallas.

Used within a surveillance camera, it could detect motion without getting tricked by that tree near the air vent.

It could also be used in surgical situations where a specialist located in another state can more easily study facets of the video being provided to him (cutting out noise, if you will).

You could do some really weird video editing where you could create a scene of a person standing in a verdant field in the middle of summer with snow falling within his 'mask'.

Items recorded in this way (presuming the mask is also recorded) could perhaps be admissible evidence that helps the court focus on a specific action that might otherwise get missed.

It might also provide a less-expensive way to make 3-D videos. Precursor to holographic movies?

Re:Fun to abuse... (-1)

Fucky the troll (528068) | more than 12 years ago | (#2978793)

Are you saying that George Bush is your favorite politician?

Get thee to a psychiatrist.

Re:Fun to abuse... (-1, Troll)

Anonymous Coward | more than 12 years ago | (#2978824)

You liberal, you don't know shit 'cause you've never been fucked in the ass.


This ain't about justice! No, this is about order! Who rules? Fascism is coming back.

Re:Fun to abuse... (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2978861)

God, I miss having a hard cock in my leather cheerio.

Re:Fun to abuse... (-1)

October_30th (531777) | more than 12 years ago | (#2978863)

What are you trying to say?

That conservatives "know shit" because they've all been fucked in the ass? I thought a conservative would rather die than experiment in alternative lifestyles...

Re:Fun to abuse... (1)

Prisoner Of Gravity (555440) | more than 12 years ago | (#2979013)

Actually, if you had a pulse, you'd notice that conservatives tend to be richer than liberals, and that rich people tend to have a lot more sex with more partners.

Re:Fun to abuse... (-1)

October_30th (531777) | more than 12 years ago | (#2979063)

Yeah, and the shoes you wear at a nightclub can make the difference between you getting a blowjob or an affair with your own palm.

Forger's wonder tool (5, Interesting)

BlueUnderwear (73957) | more than 12 years ago | (#2978920)

...admissable evidence that helps the court...

IMHO, this technology would rather do the contrary. It makes photo forgeries so damn easy: no afternoon-long sessions with the GIMP to get the exact contours of people to delete from or insert into pictures: just use the ZCAM's distance keying and you get instant masks. The example given was scary: a business meeting, from which they could edit out people at will. The ideal tool for anybody who wants to rewrite history. So, forget about photos staying admissible as evidence in court.

Re:Forger's wonder tool (0)

Anonymous Coward | more than 12 years ago | (#2980068)

You don't understand: it would add an extra variable to say that the footage is more authentic. It would be pretty hard to doctor together a 3D scene that matched something that was unedited. (Not impossible, but then again, they can doctor 2D videos today too; it's just a lot of work in either case.)

Re:Fun to abuse... (1)

dario_moreno (263767) | more than 12 years ago | (#2979387)


If I remember correctly, there were 3D pr0n flicks made during the 70's golden era (think "Boogie Nights"). I was told these films were pretty impressive on widescreen... maybe this technology will also bring a revival of those artistic explorations!

Re:Fun to abuse... (2, Insightful)

andycat (139208) | more than 12 years ago | (#2979726)

It might also provide a less-expensive way to make 3-D videos. Precursor to holographic movies?

It's a step along the way, but it's got one major drawback: it only captures a scene from one viewpoint. As soon as you move away from that viewpoint you're going to see holes in the scene where the camera didn't capture any information. To fix this, you must either (a) keep the viewpoint fixed at the camera's center of projection or (b) capture multiple views of the environment to fill in the missing bits.

Cameras like this have another potential benefit: better video compression. There's a section of the MPEG-4 standard that provides for segmenting your scene into objects so you could, say, encode the weatherman separately from the backdrop he's waving his hands at. If you shoot with a camera like this that can give you a rough silhouette of major objects in the scene, you could spend more of your time doing high-quality encoding of the people running around in the foreground and less of your time on the background that doesn't change for the length of the shot.

That said, I'm awfully skeptical about their claims of precision. As another poster has mentioned, there's a reason why laser range scanners cost so much: building an accurate rangefinder with lots of dynamic range is hard. As for object segmentation... I personally don't believe the image they provide as an example. Take a look at the depth map of the people at the conference table. In particular, look at the tabletop. It's nearly parallel to the camera axis, which means that its depth should be increasing fairly rapidly, which means you should see a gradient from light (near) to dark (far) in that part of the image -- but no, it's all one color.

I suppose you can explain that as treating everything between depths D1 and D2 as a single object, but that doesn't work all that well in practice. What's far more likely in my opinion is that that object mask is a hand-created example rather than the actual output of the device.

I used to key images... (2, Informative)

Joe 'Nova' (98613) | more than 12 years ago | (#2978792)

I didn't have a depth thingy to tell me how to replace the image; we had blue backgrounds which had to be evenly lit, and we'd pray nobody came in wearing blue.
The real reason blue was used is because blue makes up only about 11% of the luminance signal, at most, and it's also a very rare color (saturation-wise) in a picture. Most people don't wear blue tarp mascara, and it was acceptable.
The other type of keying was on an Amiga with a GenLock, using the background color as the transparency - a static image over a live background. You could also set the transparency, so you could get ghost-like effects.
But with one of these, you can probably make a scrolling background with the occasional tree popping to the front. If you were to do the same with an editing suite, you're looking at at least a good hour, and when you rent out facilities, you look for all the help you can get. Just printing out a still from video can cost more if you're using a "video printer".
I wonder if you can set the depth manually, or if it's hard coded. It might be fun to see something pass "through" something else.

Compression? (0)

Anonymous Coward | more than 12 years ago | (#2978801)

Will that help to improve the compression algorithms?

Will the depth be useful data for the compression algorithms?
Yeah, ouais, cool :):):):))

Re:Compression? (0)

Anonymous Coward | more than 12 years ago | (#2979707)

To put it simply, yes, but only in films, where the data could be used to extract individual objects and compress them individually. It would not work on still images.

Re:Compression? (1)

robosmall (63822) | more than 12 years ago | (#2979939)

This depends on the goal of the compression. Do you want the compression to preserve quality, or what I would call "temporal relevance"?

For teleoperation of remote systems, it might make way more sense to weight the compression with respect to relative distance: something that is closer gets higher quality, while something farther away gets lower quality.
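One cheap way to fake that weighting is to smooth the far field before handing frames to an ordinary codec, so the encoder spends its bits up close. A sketch; the 3 m cutoff and kernel size are arbitrary choices:

    import cv2
    import numpy as np

    def depth_weighted_preprocess(frame, depth, far_mm=3000):
        # Blur everything beyond far_mm; a codec then spends fewer bits there.
        blurred = cv2.GaussianBlur(frame, (15, 15), 0)
        far = (depth > far_mm)[..., None]   # broadcast over color channels
        return np.where(far, blurred, frame)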

This will revolutionize color keying. (5, Interesting)

arsaspe (539022) | more than 12 years ago | (#2978808)

Normally, when you want to key a false background into a scene, you need to have a constant color in the background (hence the use of blue and green screens). If the background isn't a flat color, then you either have to go at it with Photoshop frame by frame, or use expensive border-tracking software which is less than perfect. You could spend hours setting up a scene just right, with screens placed in all the right places, making sure that there is nothing else that is the same color as the key, and planning camera angles for an action sequence - not to mention the struggle of getting the keying to work just right.

With this new technology, however, you could film an actor just about anywhere with very little preparation, key him/her out based on depth AND color (some situations may need both), and easily pop new things both in front of and behind the actor. It could save movie studios a lot of time, effort, and money on special effects - especially after you consider how easy it would be to generate a virtual stunt double from the 3D mesh (film the actor from a few angles, and merge the resulting 3D wireframes. Voila: a perfect model, down to the wrinkles in the skin).
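Keying on depth AND color is just a logical AND of two masks. A sketch of the idea; the 2 m cutoff and the crude green-screen test are placeholder values, not anything 3DV specifies:

    import numpy as np

    def combined_key(rgb, depth, max_depth_mm=2000):
        # Keep pixels that are both near the camera and not key-colored.
        depth_mask = depth < max_depth_mm
        r = rgb[..., 0].astype(int)
        g = rgb[..., 1].astype(int)
        b = rgb[..., 2].astype(int)
        # Crude chroma test: green-screen pixels have green well above red/blue.
        is_key_color = (g > r + 40) & (g > b + 40)
        return depth_mask & ~is_key_color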

Re:This will revolutionize color keying. (-1, Offtopic)

Anonymous Coward | more than 12 years ago | (#2978923)

your name sounds like "ass rape" how do you feel about that?

Re:This will revolutionize color keying. (0, Offtopic)

arsaspe (539022) | more than 12 years ago | (#2978950)

I was very bored when trying to make up a slashdot username. I got the words ass and rape, and combined them (Donkeys are sexy, you know!)

Re:This will revolutionize color keying. (3, Informative)

edo-01 (241933) | more than 12 years ago | (#2979288)

especially after you consider how easily it would be to generate a virtual stunt double from the 3d mesh (film the actor from a few angles, and merge the resulting 3d wireframe. Voila, perfect model down to the wrinkles in the skin)

Uh, no... I wish it were that easy - but scanned 3D meshes of that quality are still the domain of laser scanning. There's just so much detail that even the best scanners can't pick up; major wrinkles and folds, yes, but pores and fine lines have to be simulated with displacement/bump and colour maps derived from the scan data (basically, as it scans, the device takes a big long photo of the object to wrap around it later). Once you have the point cloud from the scan (raw data) there is a LOT of cleaning up to do to get a parametric mesh with correct UVs (texture mapping co-ordinates) for use in production.

For more info, check these guys [headus.com.au] out - we've used em recently on a couple of film and tv projects and their output is damn nice, but the price tag reflects the complexity and difficulty of the task.

Re:This will revolutionize color keying. (1)

edo-01 (241933) | more than 12 years ago | (#2979302)

Damn. I hit 'post', then I think of something else to say... The depth info may not be good enough to generate a nice detailed mesh of an actor or a set etc., but let's say it can give at least a coarse mesh for each frame of a shot; you could use the low-res 3D info for things like shadow passes for CG elements (you comp in a flying robot going past your real actor, and use the depthcam-generated mesh info for those frames to have the robot's shadow slide over the actor's body correctly. If the actor's costume has reflective surfaces like goggles, you can use the 3D info to have the robot reflected in them - it may sound subtle, but it's the subtle things that tie a shot together).

Interesting thought .. (0)

uq1 (59540) | more than 12 years ago | (#2978809)

It'd be great if they could apply this sort of technology to previously recorded works of art like any movie with Ron Jeremy or John Holmes in it.

I'd certainly like to feel like I'm there in the action when Ron's banging some latino chick in the ass and she's begging for more!

Re:Interesting thought .. (1)

BCoates (512464) | more than 12 years ago | (#2978926)

There's at least one short (the scene, of course) John Holmes scene in 3D floating out there, in the awful 3D porn "Lollipop Girls in Hard Candy".

I doubt anyone will believe this, but it is the only porn movie I have ever gone to see, I swear I'm not part of the dirty raincoat brigade. :)

--
Benjamin Coates

Re:Interesting thought .. (0)

Anonymous Coward | more than 12 years ago | (#2979721)

The data that drives this technology is not available in films that have already been recorded. It is not possible to use _this_ sort of technology to splice yourself into a porno.

used to do this with 3d studio (2)

DrSkwid (118965) | more than 12 years ago | (#2978815)

3D Studio 4 had a plugin to render z-buffer depth too, to get scenes like the ones from this camera.

It's great for doing depth-based effects such as artificial depth of field (3DS4 didn't have that).

I'd love to have one of these cameras available for making live video stuff. I'm looking forward to getting my hands on one; I hope my local video facilities unit gets one (I'm going to mail them a link).

Coming soon to an MTV near you. Sadly, probably not from my studio any more. I gave that up when 3dsMax came out; it seemed like there was no room left for a two-man outfit (one gfx, one coder).

What this gives you (1)

molekyl (152112) | more than 12 years ago | (#2978820)

Video recorded with this technology will give you two video streams:

* The normal video-stream that any video-camera will give you.

* Another video-stream containing depth information.

So, what you have, at best, is a way to tell the relative distance from the camera to each point in the image - which will let you address separate elements of the image based on depth. But you _won't_ have anything more image-wise than you can record at home with your Sony.

Sorry, no 3D-porn.
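Addressing separate elements by depth might look like slicing the depth stream into bands, one mask per band. A sketch, assuming 8-bit depth frames:

    import numpy as np

    def depth_bands(depth, edges=(0, 85, 170, 256)):
        # Split one 8-bit depth frame into near/mid/far boolean masks.
        return [(depth >= lo) & (depth < hi)
                for lo, hi in zip(edges[:-1], edges[1:])]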

Re:What this gives you (1)

BCoates (512464) | more than 12 years ago | (#2978915)

Sorry, no 3D-porn.

Couldn't you take the image streams, do the red/blue-shift thing based on the depth stream (or better yet, disneyland-style polarization, if you've got the playback gear), and there you go, 3D-porn :)

--
Benjamin Coates

touchable porn (0)

Anonymous Coward | more than 12 years ago | (#2978918)

You have a z-stream of data. When you are moving around this in 2D, the amount of effort is dependent on the gradient where you are going. I would think someone could come up with a glove attachment connected to 3D force-feedback actuators that would let you touch the 3D models of porn.

Re:What this gives you (2, Interesting)

StarBar (549337) | more than 12 years ago | (#2978973)

Depth information and movement can give a chance to triangulate targeted objects. From there you can probably move on to the more sophisticated compression techniques (soon to be) introduced by MPEG-4.

Ever seen the movie "Enemy of the State", where they triangulate 3D shapes from satellites and movements? Great techniques in that movie, but a scary scenario.

Re:What this gives you (2)

buckrogers (136562) | more than 12 years ago | (#2979175)

OK, I find compression interesting... I think that you could use this to compress a 3D video stream, by essentially "seeing" each object as a separate stream of data in the image and compressing each separately.

You might be able to actually generate a 360 degree view of the background and encode the distance and angle of the view in each scene, then place the separate actors into the scene.

The really cool thing about this technique is that it would make it easy to delete or replace any one object in a scene in a video.

Re:What this gives you (1)

StarBar (549337) | more than 12 years ago | (#2979368)

I think what you describe is very close to what they are trying to do with the MPEG-4 animation extensions. Doing this with live content is very exciting. A movie would not be a series of pictures but rather a scenario with a number of predefined view paths which the viewer can choose between. The same goes for fully animated movies like "Final Fantasy" and "Toy Story". In either case the size of what is broadcast shrinks dramatically.

The downside: (5, Funny)

TheFlu (213162) | more than 12 years ago | (#2978849)

"Once you capture live action footage in object video format, you can not only make it more visually engaging, but also sell advertising right in context of the live event."

Great, now you won't be able to distinguish between the show you're watching and the advertisement. Now when I'm watching TechTV, I can look forward to Britney Spears bouncing through with a Pepsi at 30-second intervals.

Re:The downside: (0)

Anonymous Coward | more than 12 years ago | (#2980847)

Britney Spears bouncing is a bad thing?

;)

TOUCHDOWN! (-1)

medicthree (125112) | more than 12 years ago | (#2978853)

180

Wow... (1)

t_allardyce (48447) | more than 12 years ago | (#2978904)

with this technology, NASA could fake the moon/mars landings again, and this time - get it right! rofl

You could do lots of interesting tricks with this - like changing the cut-off on the z-buffer, so when someone walks away from the camera, it looks like they're walking through a wall.

Re:Wow... (1)

Sarcazmo (555312) | more than 12 years ago | (#2978928)

I bet this time they won't hire OJ Simpson to fake the Mars Landing.

(MODS- It's an obscure joke, just because you don't get it doesn't mean it's offtopic)

The new reaches of minaturization (2)

PhotoGuy (189467) | more than 12 years ago | (#2978906)

Just amazing how DV cameras just keep getting smaller and smaller. I think I'll pick up that ZCAM, and get the optional belt case, so it's with me everywhere I go :-)

I guess this thing is targeted more for reporters and the media, than the consumer.

I assume "keying" is what we dumb consumers typically know as "blue screening" or "green screening", but this lets you do the same without a solid background, since it can separate out the people in the foreground using a depth cutoff instead.

Neat technology. I think there'll be more practical uses for this than you might think at first.

I wonder how accurately the z layer aligns with the pixels. Since it's a different infrared source, bounced off the subjects, I wonder if there's some fancy alignment that has to be done, or if the same pixels on the camera pick up the depth information. It'd be the difference between perfect alignment, and having sloppy edges around objects, which is pretty significant for a lot of uses.

-me

Re:The new reaches of minaturization (1)

Screwtape (9319) | more than 12 years ago | (#2978948)

Just amazing how DV cameras just keep getting smaller and smaller. I think I'll pick up that ZCAM, and get the optional belt case, so it's with me everywhere I go :-)

The "video-camera-in-a-match-head" phenomenon is pretty much exclusive to consumer gear. A good professional video camera should be at least two feet long. :)

I assume "keying" is what we dumb consumers typically know as "blue screening" or "green screening"

No, what you know as "blue screening" is technically known as "system crash". :)

The real technical terms are "chroma-key" (make pixels with a certain colour transparent) and "luma-key" (make pixels with a certain brightness transparent). Most guy-in-front-of-unusual-situation stuff (like, say, a weatherman) is done with chroma-key.
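Of the two, a luma-key is the easier to sketch: compute brightness, threshold it, punch alpha. Below, the standard Rec. 601 luma weights; the 230 threshold is an arbitrary example value:

    import numpy as np

    def luma_key(rgb, threshold=230):
        # Make pixels brighter than `threshold` transparent (alpha = 0).
        luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
        alpha = np.where(luma > threshold, 0, 255).astype(np.uint8)
        return np.dstack([rgb, alpha])  # RGBA frame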

Re:The new reaches of minaturization (1)

dickens (31040) | more than 12 years ago | (#2979224)

Pros *are* using tiny cameras, and mics as well.

There was an article in EQ magazine about working the X-games in Philly. Helmet-cams and matchbook-size "button mics" abound.

It doesn't appear to be on their site [eqmag.com]

Not mentioned in the article, but another cool thing they have at the X-games is a camera suspended above the half-pipe by three (four?) cables connected to computer-controlled high-speed winches. It can "fly" over the skaters' heads at amazing speed, dropping down into the pipe to look up at them on the vert ramp, and then zooming up out of the way.

Blue screen? (1)

yerricde (125198) | more than 12 years ago | (#2979479)

I assume "keying" is what we dumb consumers typically know as "blue screening" or "green screening"

That is, unless they use Windows 9x regularly ;-)

but this lets you do the same without a solid background

Actually, use of a still (non-solid) background would help even this technique, as post-processing can massage the background vs. foreground using traditional MPEG motion compensation for an even more accurate contrast between a background moving in one direction and a subject moving in the other.

Linux! (-1, Troll)

Anonymous Coward | more than 12 years ago | (#2978907)

The penguin is so stupid, he can't even handle a simple device like a winmodem! But the so 'evil' windows can handle it with ease! Linux is crap! the only good thing is you can control your swimming pool (if you want to drown, that is). And you get goatsex [goatse.cx] with the XXX windowing system!

Still? (2, Interesting)

BCoates (512464) | more than 12 years ago | (#2978944)

Would it be possible to economically do this with still cameras (preferably film vs. digital)? Are there already products that do that? It would be cool to be able to record a depth 'image' with my photographs for later editing...

--
Benjamin Coates

Re:Still? (1)

foldedspace (463615) | more than 12 years ago | (#2979847)

Super idea! It would make editing in Photoshop or GIMP super fun. They might have to change the programs to take advantage of the new feature. Maybe a new file format too?

Red and Blue 3D (0)

dmomo (256005) | more than 12 years ago | (#2978981)

Another fun use for this technology would be to create video that works with red and blue 3D glasses, which would allow the viewer to watch in 3D or "normal" mode.

You can create a red and a blue version of the image where pixels are cloned into red and blue counterparts. The pixels that represent closer points are paired farther apart, while pixels representing deeper points are paired closer together. This would make for more realistic 3D images without the need for filming with a special double-lens camera. Without this information, the best you can do is manually z-buffer different sections; the depth in that case is not dynamic, and objects normally appear to lie on artificial incremental planes. For instance, an entire human face might be on one plane, whereas with this technology (depending on the sensitivity of the depth info) the nose will be 3-dimensional relative to the person. This is not a specifically useful application, but cool in any case.
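That scheme is essentially what's now called depth-image-based rendering: displace each pixel horizontally by a disparity proportional to its nearness, once per eye, then pack one eye into red and the other into blue. A crude sketch with no hole-filling; max_shift and the "255 = closest" depth encoding are assumptions:

    import numpy as np

    def anaglyph(gray, nearness, max_shift=8):
        # gray: (H, W) uint8 frame; nearness: (H, W) uint8, 255 = closest.
        h, w = gray.shape
        shift = (nearness.astype(int) * max_shift) // 255  # per-pixel disparity
        cols = np.arange(w)
        out = np.zeros((h, w, 3), dtype=np.uint8)
        for y in range(h):
            out[y, np.clip(cols - shift[y], 0, w - 1), 0] = gray[y]  # red = left eye
            out[y, np.clip(cols + shift[y], 0, w - 1), 2] = gray[y]  # blue = right eye
        return out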

Re:Read and Blue 3D (1)

YeeHarr (187241) | more than 12 years ago | (#2981526)

What you described was demonstrated with this tech at NAB in 2000. In the demo they had the camera with the audience wearing polarized glasses (which I kept).

the technique is pretty old (2)

markj02 (544487) | more than 12 years ago | (#2978986)

Getting real-time depth information from the amount of IR reflected from a pulsed IR light source is a pretty old technique. It's used in some input devices to detect where people are in front of the computer. The use of this information for video keying may be new, though.

Re:the technique is pretty old (2)

i_am_nitrogen (524475) | more than 12 years ago | (#2980905)

The technique is old, but doing it per-pixel is very cool. Now all that needs to be done is to write a 5 channel video format (RGBAZ) and I can start writing software that uses this for unrealistic things. Ohh, the possibilities...
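Such a five-channel RGBAZ format is easy to mock up with a NumPy structured array - purely hypothetical layout, since no such standard exists:

    import numpy as np

    # One 640x480 RGBAZ frame: 8-bit color and alpha, 16-bit depth per pixel.
    rgbaz = np.dtype([("r", np.uint8), ("g", np.uint8), ("b", np.uint8),
                      ("a", np.uint8), ("z", np.uint16)])
    frame = np.zeros((480, 640), dtype=rgbaz)
    frame["z"][:] = 65535                 # initialize depth to "far"
    frame["a"][100:200, 100:200] = 255    # mark one block as opaque foreground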

Will Make For Some Pretty Cool Effects (1)

Taliesan999 (305690) | more than 12 years ago | (#2978989)

Sounds like an interesting technology that'll make for some pretty cool effects/uses.

Apart from the obvious use - getting virtual objects to pass correctly between/around objects in the real scene, or vice versa - you've freed up the colour channel info that used to act as the key, for other things.

Imagine keying an actor and his or her clothing in blue, and using the depth keying to replace the blue with a projected texture or somesuch, using the depth information to do the texture calculations; or keying sports equipment in sports broadcasts.

Or, if the technology eventually scales down to an affordable level, it might make an interesting input device for playing video games.

I've already got realtime keying w/o color bg... (3, Funny)

undertoad (104182) | more than 12 years ago | (#2979026)

it's called a dumb terminal.

Thank you.

Remote sensing? (2)

Boiling_point_ (443831) | more than 12 years ago | (#2979029)

The use of this camera technology for video composition is great, but if you bundle a panoramic (360 degree) camera with it, you remove the main reason accurate 3D visual reconstructions are expensive. I'm thinking: export a 3D map of every object in range, then feed that into CAD.

Now take your CAD file, recompile and render it with a Quake3 engine, apply sampled textures, and you've got a very cheap, fast, good 3D walkthrough - architects will enjoy this too, as will tourism sites.

It's also going to mean some great first-person-shooter maps :P

Re:Remote sensing? (0)

Anonymous Coward | more than 12 years ago | (#2979074)

I can't see this being used in combination with true CAD/CAM, the reason being that I can't see this having good enough resolution... the further away you get, the worse the resolution becomes. For CAD/CAM applications, get the laser scanner found at www.somatech.nl, which can attain a resolution of microns (it doesn't say that in the blurb on the webpage, but I worked there, and in practice it will get that kind of resolution in the point cloud).

Re:Remote sensing? (1)

pauldy (100083) | more than 12 years ago | (#2979204)

Or you could just use the piece that makes most of those 360 degree cams work: that nice little bowl-shaped mirror attachment that goes just in front of the lens. The little convex mirror will produce an excellent 360 degree panorama.

Re:Remote sensing? (1)

robosmall (63822) | more than 12 years ago | (#2979292)

Yep! Great idea. A million-dollar one too, which means that it is hard.

The camera produces a 3D point cloud, from which geometry (CAD) does not fall out naturally.

Visual Effects work (2, Interesting)

edo-01 (241933) | more than 12 years ago | (#2979188)

I posted a comment [slashdot.org] a while ago that explained the uses in visual effects work for depth-cameras, and some of the problems with existing methods of pulling a matte off of live action plates...

We were actually talking about this at work the other day; mainly wondering how well it would deal with things like fine hair, smoke, transparent objects and stuff like film grain/video artifacts/lens artifacts etc...

Would love to try one and find out...

ZCAM has been around for quite a while now (1)

Qbertino (265505) | more than 12 years ago | (#2979270)

The ZCAM videocam extension has been available for more than half a year now.
The fact that it actually works as advertised is somewhat astonishing. If there's a large enough distance between foreground and background (> 1.5 meters), it keys without any hassle. That means no more blue or green screens.

Hair? Glass? (2, Informative)

Anonymous Coward | more than 12 years ago | (#2979331)

The biggest problems in color keying are hair and glass (as in eyeglasses).

If this system, as it claims, is simply making a z-buffer (depth buffer) of the image, then it's going to see hair and glass as an opaque lump, not the semi-transparent reality.

Blue and green screening (not chroma keying) can do a very good job of pulling out variable opacity and thin items like hair, especially with the newer LED screen-illumination camera rings.

This technology has some nifty tricks and will allow more poor-quality keying to continue, but it won't replace blue and green screens.

Re:Hair? Glass? (0)

Anonymous Coward | more than 12 years ago | (#2979653)

Good point. I was just looking at the article and the glasses of water on the table in the example. They look too good to be true.

slow thinkers this is for object recognition! (0)

ballzhey (321167) | more than 12 years ago | (#2979477)

Knowing the depth of something is one more feature we can use in a recognition-by-components model of the perceptual process. Introduce a context for top-down processing in conjunction with this empirical bottom-up camera, and computers could recognize all sorts of things.

ZCAM is way old (0)

Anonymous Coward | more than 12 years ago | (#2979622)

Hi folks,
I was rather astonished to read about ZCAM today, as it is rather old.
I saw it working at last year's NAB in Vegas.
It still is a little bit rough, and the keyer is not performing as well as others do, but it is an interesting technology, esp. for local news stations or variety shows...

This is huge for MPEG4 (5, Informative)

William Tanksley (1752) | more than 12 years ago | (#2979624)

I can't believe nobody has posted about MPEG4 yet. This is very interesting for that -- film using this, and you can encode into MPEG4 format with /huge/ compression almost automatically. The hard part of MPEG4 is object detection; this makes that almost free.

-Billy
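That "almost free" object detection would reduce to thresholding the depth map and labeling connected regions. A sketch using OpenCV's connected-components labeling; the 2 m threshold is arbitrary:

    import cv2
    import numpy as np

    def depth_objects(depth, fg_max=2000):
        # Label foreground blobs nearer than fg_max (same units as depth).
        fg = (depth < fg_max).astype(np.uint8)       # 8-bit binary mask
        count, labels = cv2.connectedComponents(fg)  # label 0 is background
        return count - 1, labels                     # object count, label map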

Goes Beyond MPEG4 Codec (2)

cryptochrome (303529) | more than 12 years ago | (#2980007)

You're absolutely right - this will make a huge difference for compressed video by separating out the layers of the image. Motion prediction (or rather background prediction) will become trivial. The potential for this goes well beyond the existing MPEG4 codecs - indeed I expect it to spawn a whole new generation of codecs based on RGBD colorspace. Not only that, it will allow you to easily build up a detailed 3 dimensional representation of the static objects in your video, which is a whole new technological potential.

Stereoscopic video? (2)

Bowie J. Poag (16898) | more than 12 years ago | (#2980070)



Why bother? A vertical split-screen image for the left and right eye is all you need. There's nothing stopping conventional television from broadcasting stereoscopic images. Get two camcorders, tape 'em together at the sides, and videotape stuff in your house. Edit the video so that the left camera's image displays on the right-hand side of the screen, and vice versa. Bingo: 3D video.

See what I mean? [ibiblio.org]

Cheers,
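Assembling that swapped split-screen frame (for cross-eyed free viewing) is a one-liner per frame:

    import numpy as np

    def cross_eyed_frame(left_img, right_img):
        # Left camera's image on the right half and vice versa.
        return np.hstack([right_img, left_img])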

Re:Stereoscopic video? (2)

PhotoGuy (189467) | more than 12 years ago | (#2980162)

Why bother? A vertical split-screen image for the left and right eye is all you need. There's nothing stopping conventional television from broadcasting stereoscopic images. Get two camcorders, tape 'em together at the sides, and videotape stuff in your house. Edit the video so that the left camera's image displays on the right-hand side of the screen, and vice versa. Bingo: 3D video.
But this isn't just about presenting the 3D effect of video (in fact, they don't talk about that at all, from what I can see). It's more useful for clipping your live video images in real time to different depths (only keep the people up close, ignore the background).

In *theory* you could do this with two cameras and some amazing processing that compares the two images, extracting the depth information for each pixel. But if such software even exists (and I think it might, for leading-edge 3D scanning techniques), there's no way it could be done in real time, like the ZCAM does.

-me

Getting Closer? (0)

Anonymous Coward | more than 12 years ago | (#2981151)

I'm still waiting for those cool full-body suits with 3D goggles that let you have virtual sex. Who cares about the 3D, bring on the 3M!

Not new technology - saw this at NAB in 2000 (1)

YeeHarr (187241) | more than 12 years ago | (#2981539)

NAB is the National Association of Broadcasters conference. The ZCAM was being demoed then.

In the demos they had realtime keying, so they could fly a 3D CGI character in front of and behind the live talent. There was only about a 40ms delay. This is impossible with normal keying (i.e. blue/green screen), where you can only put stuff behind the talent.

Its biggest limitation was that the resolution of the 3D sensor was low, so you had rough edges (think jaggies).

They also demonstrated a 3D Realplayer and 3D Windows Media players (which you watched with stereo shutter glasses). These players were called 'deep players'. Pretty cool but definitely not new.

Here's the Patent (1)

sane? (179855) | more than 12 years ago | (#2981658)

http://patft.uspto.gov/netacgi/nph-Parser?Sect1=PTO2&Sect2=HITOFF&p=1&u=/netahtml/search-bool.html&r=3&f=G&l=50&co1=AND&d=ft00&s1=3DV.ASNM.&OS=AN/3DV&RS=AN/3DV

Looks rather simple, akin to simple range gating.
