
Ask Slashdot: Tips On 2D To Stereo 3D Conversion?

timothy posted more than 2 years ago | from the aggghhh-my-eyes-my-eyes dept.

Graphics 125

An anonymous reader writes "I'm interested in converting 2D video to Stereoscopic 3D video — the Red/Cyan Anaglyph type in particular (to ensure compatibility with cardboard Anaglyph glasses). Here are my questions: Which software or algorithms can currently do this, and do it well? Also, are there any 3D TVs on the market that have a high-quality 2D-to-3D realtime conversion function in them? And finally, if I were to try and roll my own 2D-to-3D conversion algorithm, where should I start? Which books, websites, blogs or papers should I look at?" I'd never even thought about this as a possibility; now I see there are some tutorials available; if you've done it, though, what sort of results did you get? And any tips for those using Linux?


125 comments

Here's a tip (4, Insightful)

Hatta (162192) | more than 2 years ago | (#38805807)

Don't do it.

Re:Here's a tip (3, Insightful)

spidercoz (947220) | more than 2 years ago | (#38805885)

Exactly. People with the proper equipment and money fail at this regularly.

Re:Here's a tip (4, Insightful)

vlm (69642) | more than 2 years ago | (#38806177)

Exactly. People with the proper equipment and money fail at this regularly.

I think it is important to arrive at the correct mindset. This has never stopped people from snapping pix at weddings and sporting events and tourist traps, even if their pix look like garbage compared to a pro photo on a postcard or whatever.

If you want to do it for fun, heck yes go for it. Go Go Go. You don't need help just try it.

If you think you'll turn out something that means anything to anyone else in the world, you'll probably be disappointed. Insert stereotype of groans when someone wants to show you old-fashioned slides of their vacation. Although that old tech is getting kind of retro cool now.

Re:Here's a tip (1)

Grishnakh (216268) | more than 2 years ago | (#38808549)

Hollywood has released big-budget movies costing tens of millions of dollars to produce, using 2D-to-3D conversion, and the results have been terrible. Hollywood may suck at coming up with original storylines and good plots, but their skills at technical effects are unequaled anywhere on earth; if they can't do it, no one else can either. The whole thing is a bad idea. If you really want 3D imagery, you need a 3D camera.

Let me elaborate on that (0)

3.1415926535 (243140) | more than 2 years ago | (#38805899)

Don't do it.

Really, just don't do it, please!

Re:Here's a tip (0)

Anonymous Coward | more than 2 years ago | (#38805903)

I'd second this. I hope that 3D is just a gimmick that falls out of style rather quickly.

Re:Here's a tip (2)

ciderbrew (1860166) | more than 2 years ago | (#38806123)

Unless you only have one eye. I'm sorry to tell you ...

Re:Here's a tip (1)

Pieroxy (222434) | more than 2 years ago | (#38806343)

I'd second this. I hope that 3D is just a gimmick that falls out of style rather quickly.

Most certainly not. Converting 2D footage to 3D is a horrendous endeavor and should be stopped - or at least left alone. But well-done stereoscopic footage has an added value IMO.

Now that they've improved resolution and the stereoscopic aspect, what I'd like to see improved next is the framerate. I find 24fps vastly insufficient to relay the feeling that "you're there", and whenever there's a big tracking shot, I find it choppy at best.

I've watched 100fps footage and it does make a heck of a difference.

Now, there are plenty of automatic algorithms that already improve this in popular video projectors and TV sets, but I haven't experienced it first hand so I can't vouch for it.

Re:Here's a tip (4, Interesting)

gfxguy (98788) | more than 2 years ago | (#38806939)

You're in luck... Peter Jackson is pushing 48fps over 24 in the cinemas, stating enough digital projectors are capable. He's shooting the Hobbit at 48fps, and shooting it in 3D from the get-go. I'm more interested in the content of the movie, but I'm expecting it'll be one of the best, if not THE best, attempts at 3D so far (Jackson Explains "Hobbit" 48FPS Shooting [goo.gl] ).

He's trying to encourage future film productions to step up to 48, too.

Re:Here's a tip (1)

Pieroxy (222434) | more than 2 years ago | (#38807277)

Wow, thanks for the tip. I didn't know it was that close.

Re:Here's a tip (2)

Jarik C-Bol (894741) | more than 2 years ago | (#38808629)

Did you know they storyboarded The Hobbit in 3D? I saw a bit about it: they had two storyboard artists sitting side by side, one drawing in cyan, the other in red, and they both had glasses available, so the resulting storyboards were in 3D. Really kind of an amazing co-op effort on the artists' part. (Saw a thing about it on the production vlogs; had to break out my cyan/red sunglasses for that episode.)

Re:Here's a tip (1)

mrchaotica (681592) | more than 2 years ago | (#38808867)

Sucks for the folks who bought 120Hz TVs in an attempt to eliminate telecine judder; now they'll have to upgrade to 240Hz.

Re:Here's a tip (0)

jdgeorge (18767) | more than 2 years ago | (#38809553)

Most significant comment I've seen so far. I'm happy to do without 3D TV. I'm not happy with the idea of buying a TV with a higher refresh rate to eliminate this annoying effect.

Hopefully the industry will settle on 60fps, instead of forcing yet another round of TV purchases.

Or... frightful thought.... is this another standards war brewing, in the style of Blu-ray versus HD-DVD? Where are the content producers other than Peter Jackson on this issue?

Re:Here's a tip (1)

bill_mcgonigle (4333) | more than 2 years ago | (#38810091)

Sucks for the folks who bought 120Hz TVs in an attempt to eliminate telecine judder; now they'll have to upgrade to 240Hz.

It's OK, TVs today don't usually last more than 5 years. Which is fine - they're quite nice and dirt cheap for their size.

Re:Here's a tip (1)

Gr8Apes (679165) | more than 2 years ago | (#38810633)

120 was insufficient, I got a 240Hz. Guess it was a good idea to hold out.

Re:Here's a tip (1)

Tridus (79566) | more than 2 years ago | (#38805957)

I'm not sure if I should mod this as offtopic or insightful, because it's somehow both at the same time.

Re:Here's a tip (1)

Aladrin (926209) | more than 2 years ago | (#38806057)

No, seriously. I absolutely love 3D. There isn't a bigger 3D fanboy in the world.

And I reiterate: Don't do it.

2D converted to 3D is what is wrong with the current 3D. It's absolute garbage. It makes me want to pull my hair out.

Re:Here's a tip (1)

gl4ss (559668) | more than 2 years ago | (#38807515)

well, it can be done. if you have an hour of time per 4 secs of video where you decide that some 3d effect works and is worthwhile.

some scenes you can do pretty OK automatic detection of what is background and what's foreground (by motion). but it'll still be more like a puppet show than what real 3d should/would be.

Re:Here's a tip (4, Insightful)

Jappus (1177563) | more than 2 years ago | (#38809055)

Don't do it.

Well, there is a way to do it, a very elegant way even. One that can be, for all intents and purposes, as good as you can get with the raw material; even to the point where the average human will not be able to tell the difference.

The thing is: That solution has a big catch. How big? Well, to put it mildly, you will most likely win the Turing Award in the process of doing so and will at some point end up with a Nobel Prize in your hand, too. As you can imagine, the solution is: Artificial Intelligence; and if you want to really do it, only strong artificial intelligence will do.

The fact is, as others have quite succinctly pointed out, that the issue is in determining what is "in front" and what is "in the background" on top of how far away everything is. This is, quite simply, impossible to do right if you approach it as a purely algorithmic picture-to-picture problem. There is just not enough information inside the frames/movie to do it well enough even at the best of times.

So, what do you do? Easy, you import external information. Things like: "This is a tree; That is a human. A tree is bigger than a human. Both take up the same space in the picture. Assumption: The human is closer than the tree. Proof: The tree casts a shadow on the human and the only light source is behind the tree. Angles point to a distance of 20 meters between human and tree. Etc. pp."

This line of reasoning imports lots of information from the outside; essential things like "What does a tree/human look like?", and "What are their relations to each other size-wise?". But if you grant that this information can be derived and used by an AI, the result can be a very precise derivation of the distances between objects.

It is exactly the same line of reasoning the human brain uses for large distances (where the parallax of your eyes is too small, focus is unimportant and difference between eye positions negligible), or when you have lost vision in one eye (or just plainly covered it). Even though your brain suddenly has only half the information, it is capable of giving you a good feeling for distance and depth.

Of course, it doesn't always work, as far too many optical illusions like the Ames room show, but it works significantly better than a "pure" picture-to-picture approach and is the sole reason why almost everyone here feels that 2D-3D conversions are so horrible:

Their brain tells them that what they see just can't be correct, even if their eyes have actually seen it.

But of course, just using 2 cameras is much simpler. So good luck with (strong) AI. I would be surprised if you solved this issue all by yourself. :)

Re:Here's a tip (1)

Hatta (162192) | more than 2 years ago | (#38809535)

No, if you do it that way you just end up with flat cardboard scenes with a bit of depth. If I can't observe parallax, say a slight rotation of someone's head, when I switch eyes, it's not real 3d.

Re:Here's a tip (1)

EdZ (755139) | more than 2 years ago | (#38810173)

To elaborate: imagine that you were asking for a magical algorithm to automatically colourise an arbitrary monochrome video. That's peanuts compared to adding accurate and, more importantly, pleasant looking stereo data to a 2D video, while creating said stereo data from thin air.

Zoom and enhance! (0)

Anonymous Coward | more than 2 years ago | (#38805859)

Another example of information simply not being there...

Looks pretty bad IMO...

Re:Zoom and enhance! (0)

Anonymous Coward | more than 2 years ago | (#38806053)

I saw the script. It pops out moving targets using the motion information from the codec, assigning an arbitrary 'plane' to each moving subject, with depth depending on their speed.

Exceptionally bad. Expect stuff getting popped in and out, resetting at every keyframe, and just being bad all around.

new generation, new suckers for '3d' (2, Insightful)

TheGratefulNet (143330) | more than 2 years ago | (#38805873)

we all were suckered. we tried it, hated it and moved on.

each time they try to re-invent this, it's still just an effects gimmick.

you'll soon grow bored.

don't invest anything in this. it's a recurring cash grab due to industry boredom.

and as a fulltime glasses wearer, I'd never be caught dead with cardboard glasses over my regular ones. an absurd concept if there ever was one.

Re:new generation, new suckers for '3d' (2)

spidercoz (947220) | more than 2 years ago | (#38806009)

and as a fulltime glasses wearer, I'd never be caught dead with cardboard glasses over my regular ones. an absurd concept if there ever was one.

Word. Pay twice as much to wear ANOTHER pair of glasses and watch something that will more than likely give me a headache? Where do I sign up!

Until they have fully immersive holography, count me out.

Re:new generation, new suckers for '3d' (1)

AngryDeuce (2205124) | more than 2 years ago | (#38806345)

Until they have fully immersive holography, count me out.

Ditto. I'm typically the sour grapes guy of the group that always resists seeing the newest 3-D blockbusters at the theater because I can't stand it, it looks like overly dim, out of focus crap and 9 times out of 10 I leave with a headache to boot.

That being said, there is only one film that I want to see in 3-D, and that is The Hobbit. I've resigned myself to the headache, but having seen the behind-the-scenes footage of how they're shooting the film, [youtube.com] I'm very interested to see how it compares to the typical 3-D crap that Hollywood puts out...

Plus I just fucking love The Hobbit, so there's that...

Re:new generation, new suckers for '3d' (1)

TheRealMindChild (743925) | more than 2 years ago | (#38806145)

I don't know. During Cyber Monday I was able to pick up a 3d television for the same price I was willing to pay for a 120hz 1080p television, so I did. I got a kickback on the glasses, so, while I know that can be a real expense, I didn't suffer from it.

Sure, it's a gimmick. But the Saturday night movie with my kids has a whole new level of "excitement" for them, especially when they have sleepover guests. That alone is well worth it. And my Nvidia GTX on my PC works with it, and playing Left4Dead 2 or Batman: Arkham Asylum on it is bloody fantastic. That takes the value over the top.

On a side note, this TV (as well as my 3d DVD player) do 2d->3d conversion on the fly. It works *ok*. It seems its decision is based upon movement. The more an object moves, the more it jumps out. It is logical, but you can imagine where that doesn't work.
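The on-the-fly heuristic described above ("the more an object moves, the more it jumps out") can be sketched in a few lines of numpy. This is only a guess at the idea, not the TV's actual algorithm: `motion_depth` is a made-up helper, real sets work on the codec's block motion vectors, and a smoothed absolute frame difference stands in for them here.

```python
import numpy as np

def motion_depth(prev_frame, frame, smooth=3):
    # Crude depth guess from per-pixel motion: the more a pixel
    # changed since the last frame, the "closer" we assume it is.
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    # Box-filter the difference so single noisy pixels don't get
    # promoted to the foreground.
    pad = smooth // 2
    padded = np.pad(diff, pad, mode='edge')
    sm = np.zeros_like(diff)
    for dy in range(smooth):
        for dx in range(smooth):
            sm += padded[dy:dy + diff.shape[0], dx:dx + diff.shape[1]]
    sm /= smooth * smooth
    # Normalize to [0, 1]; 1 = assumed nearest to the camera.
    return sm / sm.max() if sm.max() > 0 else sm

# A static background with one "moving" bright square: the square
# reads as foreground, everything else stays at depth 0.
prev = np.zeros((8, 8), dtype=np.uint8)
cur = prev.copy()
cur[2:5, 2:5] = 200
depth = motion_depth(prev, cur)
```

You can see immediately where this breaks: a slow-moving foreground object, or a fast pan that moves the whole background, assigns depth exactly backwards.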

Re:new generation, new suckers for '3d' (2)

LiquidLink57 (1864484) | more than 2 years ago | (#38806687)

It seems its decision is based upon movement. The more an object moves, the more it jumps out. It is logical, but you can imagine where that doesn't work.

That's what comes from basing their 3D algorithm on T-Rex vision.

Re:new generation, new suckers for '3d' (1)

Hentes (2461350) | more than 2 years ago | (#38806213)

You could just buy another pair of glasses with coloured glass. More classy than cardboard anyway.

Re:new generation, new suckers for '3d' (1)

gfxguy (98788) | more than 2 years ago | (#38807111)

When I wore glasses (had lasik) and I did research in 3D (like 20 years ago) I actually had a pair of flip ups that attached to my regular glasses.

Not that I'd recommend that for anybody that didn't want to look like a complete dork, but it was convenient for me at the time.

Re:new generation, new suckers for '3d' (1)

DreadPiratePizz (803402) | more than 2 years ago | (#38806713)

Currently, 3D is being used as a gimmick. However, like depth of field or lens focal length, it's an aspect of photography that can certainly be used to enhance your storytelling. There is an intimacy about seeing characters in 3D, which can make dialogue scenes more real. Also imagine a creepy character like Hannibal Lecter stepping off the screen and into the theatre with you to invade your personal space. That would make him even more creepy. If 3D sticks around, and people shoot exclusively for it (no 2D version later), then I do believe filmmakers will begin to use it in better ways. I simply don't know if that will ever happen economically though.

Re:new generation, new suckers for '3d' (1)

TheGratefulNet (143330) | more than 2 years ago | (#38810347)

you're wrong. a movie is mostly in the mind (and pure audio; ie, your cd collection, is even more so like that). imagination and the brain's ability to connect dots and extrapolate does more for the entertainment than your tech tricks.

I'm an audio guy and spend a lot on design and implementation, but I realize that when a tune comes on the radio, my mind gets the same memory enjoyment from it as if it were on a high-end system. The content and my extrapolation of it (on poor playback equipment) completes the experience.

3d is bullshit and more a distraction than anything else. if the story cannot keep you, nothing really is going to save it. not extra bit-perfect audio, not 300 speakers aimed at you, not a billion pixels of res. the story is the main thing, the brain comes next and all else follows below that.

kind of sad that tech companies are winning in convincing people to keep re-buying perfectly good gear.

Re:new generation, new suckers for '3d' (0)

Anonymous Coward | more than 2 years ago | (#38806891)

Definitely agree for TVs and movies. Just bought a new 42" LCD and skipped the 3D model, choosing the networked version instead.

However, I would add that the Nintendo 3DS is a little nicer than just an effects gimmick. Super Mario 3D land and Mario Kart 7 are quite neat playing in 3D. Also, the 3D Dead or Alive game (although a little boring) is a bit fun to watch the ladies fight in 3D ;-). I mainly like 3D games with 3D ON compared to the game with 3D off. However, the two main drawbacks I have found are:

1) off-angle gets blurry. Wasn't an issue with the new Super Mario 3D land but it was for Mario Kart 7. MK7 has a little more frenzied action and I often tilt the 3DS in reaction to some of the action. When you do this the 3D gets blurry or there is ghosting.

2) Most new titles (the good ones) are 3DS versions (like the two above). They don't work in the older DSes. The 3DS effectively obsoleted the older models. The other DS upgrades didn't do that.

I haven't had headaches yet after long gaming sessions since I keep the 3D slider at half way.

Real time... (0)

Anonymous Coward | more than 2 years ago | (#38805909)

... seems unlikely. You need to solve the correspondence problem for every frame, which is time consuming.

Ask Slashdot: Tips on Penis To Vagina Conversion? (-1)

Anonymous Coward | more than 2 years ago | (#38805947)

Anonymous Cowards writes: "I have a dumb idea and I lack the capability to research it myself. Please call me a dipshit. Something about Linux and FREE software even though it's irrelevant."

Oblig XKCD (0, Offtopic)

Anonymous Coward | more than 2 years ago | (#38805981)

This sounds familliar (4, Funny)

localman57 (1340533) | more than 2 years ago | (#38805989)

I'm interested in converting 2D video to Stereoscopic 3D video

George Lucas, is that you?

Special FX (4, Informative)

damaki (997243) | more than 2 years ago | (#38805999)

A friend of mine used to work for a French special effects company and he had to work on this. He told me that this is basically a world of pain and it produces great piles of smoking shit. It just sucks, even when done properly by highly trained people. Can you imagine making 3D out of a 2D tree? Making every background 3D or properly cutting out the character to get the desired effect?
It sucks, it's mostly manual, get over it.

it's impossible (0)

Anonymous Coward | more than 2 years ago | (#38806067)

With a single 2d image it's impossible to get a 3d image. You need at least two 2d images to do this, and some work to calculate distances from them.

Well, it's possible to do Photoshop things and add the third dimension's information to an image by dividing it into layers and giving a distance to each (truly Hollywood hi-tech to sell poor-3D films at low cost), but the 3D effect is unreal.

Re:it's impossible (1)

devjoe (88696) | more than 2 years ago | (#38806253)

This anon has it right. If you have two synchronized 2D films of the same thing from slightly different angles, then you can try to match objects in the two frames, and use that to determine the depth. You could just apply the red filter to one film, blue to the other, superimpose, and boom, it's like you're watching the two different films with your two different eyes, and if they were filmed with cameras set properly, it will actually look right. But it sounds like you only have one 2D film. Here, the best you can hope for is to identify different objects in the film and apply a depth to each one, try to match and track those objects across different frames and keep the depths consistent. If the objects are sitting on a flat floor and you can see their bases, or if you can see shadows of the objects, you may be able to use that info to determine depth. Otherwise, you have to guess, and the result will probably look poor.
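The superimposition step described above (red filter on one view, cyan on the other) is the one part of the pipeline that really is trivial, and a short numpy sketch shows it. `make_anaglyph` is a hypothetical helper, and a real tool would run it per frame over two synchronized video streams:

```python
import numpy as np

def make_anaglyph(left, right):
    # Compose a red/cyan anaglyph from a stereo pair of HxWx3
    # uint8 RGB frames. Through the glasses, the left eye sees
    # only the red channel and the right eye only green + blue.
    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]    # red from the left view
    anaglyph[..., 1:] = right[..., 1:] # green + blue from the right view
    return anaglyph

# Tiny synthetic example: a pure-red "left" frame and a pure-cyan
# "right" frame combine into a fully white anaglyph.
left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 255
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 1:] = 255
result = make_anaglyph(left, right)
```

Everything hard in this thread is about producing the second view in the first place; once you have it, the anaglyph itself is a channel copy.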

Re:it's impossible (1)

networkBoy (774728) | more than 2 years ago | (#38808507)

In theory you can expand the image to 3d by clipping each object and vertically slicing it into layers (for a tree, or horizontally for a bench), then adding a 3D effect to the layers, filling in where there are gaps and overlaying where needed, for each object in each frame of the film, then compositing all the objects and masks back onto the scene. Now that you've spent ~100 hours on that, your first frame is done; time to do the next, but now it's harder because you need to account for motion, so your clipping masks are not the same AR as the previous frame and your fill/overlap layers changed as well since the perspective has changed. Wash, rinse, repeat for the rest of your life to finish a 10 minute clip.
possible to do right? yes.
practical? not at all.
-nB

Re:it's impossible (1)

JaredOfEuropa (526365) | more than 2 years ago | (#38808065)

It is possible. There are some algorithms that do this (semi-)automatically. Not sure how they work (perhaps using parallax from moving objects), but they do work, and I have seen the results. I came across a 3D version of one of the Star Wars movies, and I was quite impressed with the results from what is after all an automated process. The 3D in space and landscape scenes was pretty good. However closeups of talking faces revealed the weakness; the moving face confused the algorithm and the result was something that looked a bit like the shimmering produced by rising hot air.

Impressive from a technical point of view, but I wouldn't call the results suitable for the cinema or even for home viewing. 3D movies require the director and cameramen to know their 3D stuff; you have to shoot specifically for 3D, and it is not easy. However, Cameron has shown that it can be done, and that it can add something to the movie rather than just being a gimmick with spears / body parts being pointed / thrown at the audience.

Waitaminute... (2)

CanHasDIY (1672858) | more than 2 years ago | (#38806071)

That's you, isn't it George Lucas?

Dammit, leave the original trilogy alone! The digital "remaster" was insulting enough!

Re:Waitaminute... (0)

Anonymous Coward | more than 2 years ago | (#38806721)

But if he does it in 3D he can prove Greedo shot first.

what papers to read (0)

Anonymous Coward | more than 2 years ago | (#38806137)

This is still an area of active research. For estimating 3d structure from video, "Shape and motion from image streams under orthography: a factorization method" is a good place to start. When you have a still camera and still scene, the best you can do is something like "Automatic photo pop-up." Unless you are thinking of some kind of semi-supervised approach?

Structure from Motion (0)

Anonymous Coward | more than 2 years ago | (#38806149)

My 2 ct's as a researcher in computer vision:

The problem is underconstrained. There is just no stable, proper way to extract depth from a single static image. That means still or almost-still scenes can never be converted properly.

What can be done (as in 'it might work sometimes, but will still give crappy results in general') is to extract depth between two frames from the movement of the camera, called structure from motion: find the epipolar configuration, i.e. the relative position, of the camera between the two frames using many feature points; then use corresponding points between both frames and triangulate them. The triangulation gives you 3D coordinates, which can be converted to a stereo image pair.

However, for fast movements, the scene will not be static between two frames, which messes up the triangulation. You need to additionally guess the movement to get rid of this, which is extremely tricky for non-linear camera and object movements, or for deformations. Also, your depth map will not be dense due to occlusion. All this leads to heavy, ugly artifacts. And again, remember, for no or very small camera movements, all this will fail - you cannot triangulate a point from one viewpoint.

An automatic conversion is thus destined to give bad results. It might, however, be a starting point for manual post-processing.
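The triangulation step in the structure-from-motion recipe above can be sketched in plain numpy via the standard linear (DLT) method, assuming the epipolar estimation has already given you the two projection matrices and a matched point pair. The camera intrinsics and the point below are made up for the demo; with noise-free correspondences the point is recovered exactly:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    # Linear (DLT) triangulation of one point seen in two views.
    # P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel
    # coordinates of the same point in each view. Each observation
    # contributes two linear constraints on the homogeneous 3D
    # point; the SVD null vector solves the stacked system.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

def project(P, X):
    # Pinhole projection of a 3D point to pixel coordinates.
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two identical cameras, the second displaced 1 unit along x
# (a stereo baseline). Project a known point, then recover it.
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])
X_true = np.array([0.5, -0.2, 4.0])
X_rec = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With real footage the matrices come from estimated epipolar geometry and the correspondences are noisy, which is exactly where the artifacts the parent describes come from.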

Re:Structure from Motion (1)

r2r2 (600813) | more than 2 years ago | (#38807535)

Very nice explanation.

Red/Cyan glasses (1)

AlphaBit (1244464) | more than 2 years ago | (#38806173)

You will want to avoid the old paper red/cyan glasses and go with the slightly more expensive plastic ones that are designed for LCD monitors and TVs. Otherwise be prepared for a LOT of ghosting. Also, Nvidia makes plastic red/cyan glasses that are designed to fit over regular glasses. You may also need to calibrate your monitor to make sure that red is really red and cyan is really cyan.

I was personally very surprised at how well red/cyan works. Of course the colors get a little muddled, but not as much as I had expected.

I bought these, btw http://www.amazon.com/Glasses-Pro-Ana-movies-Computers/dp/B0036NP3CS/ref=sr_1_1?ie=UTF8&qid=1327421067&sr=8-1 [amazon.com]

Re:Red/Cyan glasses (0)

Anonymous Coward | more than 2 years ago | (#38808537)

If you're going anaglyph, it's better to get green/magenta glasses. They have much better colour than red/cyan.

A big pain (0)

Anonymous Coward | more than 2 years ago | (#38806187)

I'm not familiar with the results of the AviSynth filter in the tutorial link above, but the process Hollywood uses is very labor intensive, involving manually rotoscoping objects and projection mapping. Basically you're cutting out objects and repositioning them in 3D space frame by frame. Here's a demo in Nuke (which is a costly piece of software) to give you a better idea of what the process is like.

http://lesterbanks.com/2011/05/using-nuke-for-2d-3d-stereo-conversion/

And even after all that work, films reconverted for 3D with this method tend to look pretty bad. The cardboard cutout effect is jarring.

You're missing critical information (1)

Anonymous Coward | more than 2 years ago | (#38806221)

You can't turn a two-dimensional photograph into 3D because the original has lost all the phase information that conveys needed info (e.g., "depth"). Similarly, you can't restore 2D sound to 3D, because the essential information isn't in the source recording that you'd need to "position" all the sound sources in 3D. In general, you can go from (N+1) to (N) dimensions, but you lose information. That means you cannot automatically go from (N-1) dimensions to (N) without restoring that lost information... which wasn't recorded. Therefore, you'd have to synthesize every frame of video/sound to add the missing stuff, and you can't get it automagically, because the (N-1) version simply doesn't have the information you need to make the transformation.

Example: Set up an orchestra with a flutist positioned 20 feet above the main orchestra. 2D mics have picked up all sounds, but they have no sense of where, vertically, each musical instrument is located, because the two (or more) stereo microphones are laterally displaced. You'd have to add microphones that are positioned vertically to gather the phase information for 3D, but your recording has no such information.

Re:You're missing critical information (2)

sideslash (1865434) | more than 2 years ago | (#38806515)

Let's say you have a video camera poked out of the side window of your car, and you're driving down a road alongside a wide field. The field is sparsely populated with trees, and there are mountains far off in the background.

With the use of video in such a case, the depth information can be pretty accurately inferred from the parallax effect, due to the fact that your car (and camera) are moving along the road. It's a difficult problem, but by comparing frame with frame, an algorithm might piece together a somewhat reasonable stereoscopic render of such a scene. There are many other scenes where that approach is futile, but your assertion that all depth is lost is not accurate for the case of (some) video (under ideal conditions). Let's not oversimplify the issue.

Re:You're missing critical information (1)

DreadPiratePizz (803402) | more than 2 years ago | (#38806831)

This is not as easy a problem as you suggest. Unless metadata was recorded with things like the focal length of the lens, there is no way to determine distance information from a 2D image. Even with the metadata, it is still a painstaking process that involves manually isolating every object in the frame, tracking those objects through the shot, and assigning a depth map to them. You then need to determine your interaxial distance - a wider interaxial increases the depth of the 3D effect - and then finally you need to determine your convergence, which is the plane on which the screen rests. When converting, these values may need to be keyframed.

There is a reason this costs $100,000 per minute of film to do professionally.
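Once a depth map has been assigned, the interaxial and convergence settings described above boil down to shifting each pixel horizontally by a disparity derived from its depth. A toy grayscale sketch, where `shift_view` is a made-up helper and the hard parts real converters handle (occlusion ordering, filling the holes the shift opens up) are deliberately omitted:

```python
import numpy as np

def shift_view(image, depth, interaxial=4.0, convergence=0.5):
    # Synthesize one eye's view from a 2D frame plus a depth map.
    # image: HxW grayscale array; depth: HxW values in [0, 1]
    # (0 = far, 1 = near). Pixels nearer than the convergence
    # plane shift one way, farther pixels the other, so objects at
    # the convergence depth sit "on the screen"; interaxial scales
    # the overall strength of the effect.
    h, w = image.shape
    out = np.zeros_like(image)
    disparity = np.round(interaxial * (depth - convergence)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out

# A flat mid-grey frame whose right half is "near" and left half
# "far": the two halves shift in opposite directions, leaving a
# hole in the middle that a real converter would have to fill.
img = np.full((4, 8), 128, dtype=np.uint8)
depth = np.zeros((4, 8))
depth[:, 4:] = 1.0
right_eye = shift_view(img, depth)
```

Even this toy case shows where the professional cost goes: every shifted edge exposes pixels the source frame never recorded.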

Re:You're missing critical information (2)

camperdave (969942) | more than 2 years ago | (#38808011)

There are a host of techniques to apply depth to a scene. Parallax from multiple camera angles is one. Vanishing point analysis is another. Prototype mapping (a human is going to be *this* shape, with *these* depths) and size of motion analysis (big motions are likely to be caused by objects closer to the camera) may also help.

However, the easiest way is to just shoot the thing in 3D in the first place.

Re:You're missing critical information (1)

Dogtanian (588974) | more than 2 years ago | (#38809673)

Let's say you have a video camera poked out of the side window of your car, and you're driving down a road alongside a wide field. The field is sparsely populated with trees, and there are mountains far off in the background. With the use of video in such a case, the depth information can be pretty accurately inferred from the parallax effect, due to the fact that your car (and camera) are moving along the road.

This is sort of how the Pulfrich effect [wikipedia.org] works anyway. You have a camera moving sideways while pointed at a relatively static scene. Any two frames a moderate fraction of a second apart will thus effectively have been taken from slightly different parallax positions, like twin stereoscopic 3D images.

The key is when watching this film on an ordinary 2D television you view it through a pair of glasses with one lens darker than the other. Due to the way the brain processes images, the eye viewing through the dark lens lags behind slightly and "sees" a frame slightly behind the one the other eye is currently processing.

So each eye gets a different frame- and as described above, if the camera is moving in the appropriate manner, each eye is getting a slightly different parallax view of the scene, giving a 3D effect.

They've used the Pulfrich effect for novelty TV shows, including a notoriously horrible Doctor Who "special" [wikipedia.org] in the early 90s.

Re:You're missing critical information (1)

Dogtanian (588974) | more than 2 years ago | (#38809749)

I forgot to add- if you have existing material that pans from (IIRC) right to left in the appropriate manner, you may already be able to get a 3D effect simply by viewing it through a pair of dirt-cheap Pulfrich glasses!

"Arduino sucks" (2)

sideslash (1865434) | more than 2 years ago | (#38806251)

Clarification -- Arduino doesn't suck; I'm just paraphrasing the unfortunate mentality of a bunch of posters on this article. It is bewildering to me that on a "news for nerds" site, people are discouraging somebody from undertaking what could turn out to be a cool tech project, even if it is known in advance that the end result isn't going to be "Avatar". And even if the best of 3D is a bomb in the theater, that doesn't mean it isn't a lot of fun to play with, as a school project, etc. I enjoyed messing with this stuff in physics lab in college.

Contra my provocative subject, Arduino is an excellent choice for serious hobbyists. And similarly, there is nothing wrong with playing around with 3D video techniques and even being willing to try rolling one's own algorithm.

Get a (homebrew friendly) life, slashdotters!

(If the OP clarifies that he's working on a big Hollywood title, I'll take this back. Until then...)

Creating "3d" (4, Interesting)

Maximum Prophet (716608) | more than 2 years ago | (#38806577)

There was a recent NOVA episode about aerial photo reconnaissance during WWII. To make stereoscopic images, they'd fly the plane straight and level over the target. If they could take multiple pictures with 60% overlap, they could use two adjacent images to make one stereoscopic image that was good enough to tell a ship from a decoy.

Any motion picture where the camera pans side to side gives an opportunity to create a "3d" image. If an object moves across a still camera, you can also derive 3d information. (Also if it spins)

An interesting exercise would be to process a film, make stereoscopic only what can be done properly, and leave the rest flat. A scene would start out flat, then people and things would begin to jump out at you.

Re:Creating "3d" (1)

sideslash (1865434) | more than 2 years ago | (#38806791)

This.

Also (while I seriously doubt this applies to the OP), the world of 3D will be an interesting place when the CV and AI academic communities start recognizing a bunch more objects and create the ability to much more accurately annotate and infer 3D within a scene. Currently 3D can be added by a difficult manual process, which is certainly too time-intensive to do thoroughly and well for anything movie-length; hence the annoyingly partial jobs we've seen up to now. We haven't yet seen the theoretical limits of 2D-to-3D stereoscopic conversion (which, don't get me wrong, may remain aesthetically unacceptable; just saying that we haven't yet seen them).

Re:Creating "3d" (2)

camperdave (969942) | more than 2 years ago | (#38807399)

It depends on how they accomplished the pan. If it was by pivoting the camera on the stationary tripod, then it won't work. If it was by laterally moving the camera on a dolly or crane, then you've got something.

Re:Creating "3d" (1)

Forbman (794277) | more than 2 years ago | (#38807679)

But that's how stereoscopic photos work in general, whether one is using one camera or two (or more). This wasn't a new technique.

I think the real trick was those 3-D systems they smuggled in, where they could pan around the area in virtual 3-D, which let them see the V-2 rockets as they were on their launch pads and get a feel [sic] for them.

Not a gimmick, eventually (0)

Anonymous Coward | more than 2 years ago | (#38806355)

I can assure you that personal head mounted 3D displays are not a gimmick. Technology just hasn't caught up yet. Unfortunately, markets have to make the technology viable in order to progress, which often results in markets stagnating the progress a little more than we'd like (in order to maximize profits).

this can be done easily with ffmpeg and imagemagic (5, Informative)

nitrofurano (2560015) | more than 2 years ago | (#38806381)

This can be done easily with ffmpeg and ImageMagick. You need two video sources. With an ffmpeg script, extract a picture sequence from both videos: one sequence from the left camera and another from the right. With a bash script using ImageMagick, separate the colour channels from each frame: red from one camera, green/blue from the other. Having done the separation, join (again with ImageMagick) the red channel frame from one with the green/blue frame from the other into a new picture sequence. When the sequence is ready, convert it back into video with ffmpeg. Try googling for the ffmpeg and ImageMagick instruction arguments when coding this bash script.
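
The per-frame channel merge described above can also be sketched directly. A minimal sketch, assuming decoded frames as numpy arrays (`naive_anaglyph` is a hypothetical name; the ffmpeg extraction and reassembly steps are left out):

```python
import numpy as np

def naive_anaglyph(left_rgb, right_rgb):
    """Red channel from the left view, green/blue from the right.
    Inputs are HxWx3 uint8 arrays; returns the merged anaglyph frame."""
    out = right_rgb.copy()           # keep the right eye's green and blue
    out[..., 0] = left_rgb[..., 0]   # replace red with the left eye's red
    return out
```

This straight channel swap is the simplest possible merge; see the Dubois comment below in the thread for why it ghosts badly on saturated colors.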

Re:this can be done easily with ffmpeg and imagema (1)

Kagetsuki (1620613) | more than 2 years ago | (#38806487)

THIS. Somebody mod parent up please.

Re:this can be done easily with ffmpeg and imagema (0)

Anonymous Coward | more than 2 years ago | (#38806911)

I assumed he was converting single video source into 3D - no mean feat.
If he has two, left and right, video sources he can place the video side by side and many 3D TVs will convert it on the fly, as will youtube.

Re:this can be done easily with ffmpeg and imagema (1)

Anonymous Coward | more than 2 years ago | (#38808967)

You don't want to simply filter the channels red vs. green/blue; that creates terrible ghosting. Instead, look up the Dubois algorithm: it's a linear projection from a 6-D colour space to a 3-D colour space, optimized for minimal ghosting using MSE. Finished matrices exist for red/cyan, green/magenta and amber/blue, available from Dubois's homepage. I recently used this for a project; it works great.
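
A sketch of the projection the parent describes. The matrix values below are rounded from Dubois's widely circulated red/cyan least-squares matrices; verify them against his homepage before relying on them:

```python
import numpy as np

# Least-squares anaglyph matrices for red/cyan glasses, rounded from
# Eric Dubois's published values (treat these numbers as approximate).
DUBOIS_LEFT = np.array([[ 0.456,  0.500,  0.176],
                        [-0.040, -0.038, -0.016],
                        [-0.015, -0.021, -0.005]])
DUBOIS_RIGHT = np.array([[-0.043, -0.088, -0.002],
                         [ 0.378,  0.734, -0.018],
                         [-0.072, -0.113,  1.226]])

def dubois_anaglyph(left, right):
    """left/right: HxWx3 float arrays in [0, 1]. Each output channel is a
    linear combination of all six input channels, then clipped."""
    out = left @ DUBOIS_LEFT.T + right @ DUBOIS_RIGHT.T
    return np.clip(out, 0.0, 1.0)
```

Note the negative coefficients: that is what suppresses the crosstalk (ghosting) a plain channel swap leaves in.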

Re:this can be done easily with ffmpeg and imagema (0)

Anonymous Coward | more than 2 years ago | (#38809363)

Indeed.

We need another saying: Those who don't understand computers are doomed to post questions on Slashdot. Poorly.

It's shocking how many people today avoid the CLI and think that "having" to use it is something bad or uncool. (Quote: "It's not 1989, you know?" My answer: Then why have you still not understood the point of having a computer?)

A computer's sole purpose is to automate your work away. You do the big parts with programming... or let somebody do them for you. And you do the glue with scripts and the CLI.

Important note: That doesn't mean it can't be graphical and visual, though.

But still, that's why the GUIs and desktop environments of today are so deeply wrong. Especially on a Unix-like OS.

If you want to see how it's done right, look at Maya (the 3D tool). It has a CLI, which you might not know about, similar to the Quake series console. Everything you do graphically is a script command too. So you can just do stuff, open the console, mark the last n lines, and drag them to the shelf. Then you can edit your new script, add loops, variables, dialogs. All in a bash/Tcl-like language or in Python.
And did I mention that the entire frontend you've been using to do this is made that way? Only the core/engine is hard-coded.
So it can be fully adapted to all needs, and in big companies it usually is.

Let's have *that* as a Linux UI. Let's make *everything*, be it UI-wise or whatever, also be a file. No exceptions.
How about that?

3D trickery... (1)

ThinkDifferently (853608) | more than 2 years ago | (#38806499)

...gives me a headache, especially the flickering kind.

my blueray player can. (1)

EETech1 (1179269) | more than 2 years ago | (#38806555)

My Blu-ray player (Panasonic DMP-BDT210) can simulate 3D from any 2D source, although I'm not exactly sure how it does it or how good it looks (no 3D TV). You might be able to talk someone into connecting one up at your favorite big-box store if you acted interested in buying the player and wanted a demo of its conversion capabilities. This would at least give you a firsthand idea of how it looks, so YOU can decide if it's worth it.

Re:my blueray player can. (0)

Anonymous Coward | more than 2 years ago | (#38806633)

It just guesses what sticks out where, and the result is a mess.

Here's a tip (-1)

Anonymous Coward | more than 2 years ago | (#38806587)

Use a real operating system instead of a toy.

imagemagick (1)

i.r.id10t (595143) | more than 2 years ago | (#38806611)

The convert utility in the imagemagick package does a good job of it with still images. I'd consider dumping your frames out as a series of images, running the convert utility on them, and then re-creating your movie.

I've also thought about taking that code in convert, merging it into VLC, and setting up VLC to grab from 2 cameras at once... with enough CPU and RAM, it could come very close to a real-time 3D movie.

Luminosity (1)

dtdmrr (1136777) | more than 2 years ago | (#38806617)

You might want to look into luminosity-based research. The brightness at each pixel may contain some information about the angle of the surface with respect to the camera and a light source. At some point that looked potentially promising, but of course the technique can fail pretty easily. Much of the work I've seen is based on trying to figure out how our brains do this all the time. Try closing one eye and see how 3D the world still looks (better than most 3D movies). You are going to have a tough challenge to beat that. But that doesn't mean it's not worth trying.

How to convert your 2D display into 3D (2)

backslashdot (95548) | more than 2 years ago | (#38806759)

1. Display 2D images on a flat panel TV facing you.
2. Spin the display 45 degrees so that one edge is nearer to you than the other edge.
3. That's it -- notice how pixels on one edge are closer to you while the ones on the opposite edge are further away from you (spatially); your display is in 3D now.

None (0)

Anonymous Coward | more than 2 years ago | (#38806787)

There is no way to do this and have it be "good". The big studios with their gobs of money have been insisting on doing this, and it always looks like crap. You're not going to get any better result with cheap gear and software.

More likely to make people sick (1)

Ameryll (2390886) | more than 2 years ago | (#38806869)

2D-to-3D converted media is much more likely to make people feel sick or get headaches than media recorded directly in 3D. There are two reasons for this. Firstly, you lack some information. For instance, if you look at a box that is obscuring your vision of the objects behind it in the real world, each eye has different information based on its perspective. (Try looking at something with one eye, then the other, and look at what changes behind the object.) 2D media will only have the information for one eye, and you'll have to make up/fake out that second eye. Secondly, you're trying to fake out the depth cues, and that's very hard because you often don't have the depth buffer necessary to do it right.

Have to love Ask Slashdot... (1)

TheSkepticalOptimist (898384) | more than 2 years ago | (#38806879)

Standard Template:

I want to do _something_, but I do not know how to do _something_, how do I do _something_, provided I don't know how to or do not want to waste my time using Google.

Then a barrage of responses by people that don't really know how to do _something_, but surprisingly have a lot of opinion about _something_.

And then, of course, a smart *ss like myself pointing this out.

Re:Have to love Ask Slashdot... (1)

camperdave (969942) | more than 2 years ago | (#38807557)

Of course, there are also the one or two gems of insight from people who know how to do _something_, or have done _something_ in the past, which refute the barrage of responses by people that don't really know how to do _something_ (which may even be ways of doing _something_ that you were about to try) which can help you do _something_ yourself.

2D to 3D Algorithms (2)

r2r2 (600813) | more than 2 years ago | (#38807105)

I have not seen many replies on algorithms, so here is what I know from a researcher point of view.

In a few words: if you only have a 2D video, then it is a very hard computer vision problem [wikipedia.org] , that has not been solved on the research side.

There is an active benchmark [middlebury.edu] of disparity estimation algorithms (full bibliography at the end of the page). Those algorithms take two pictures and estimate a depth image. From this depth image, it is possible to reconstruct the scene in 3D (but you cannot see what's behind objects). From my experience, this class of algorithms does quite a bad job with real-life images, and they have not been applied to video at all.

I've been using optical flows (see a related benchmark [middlebury.edu] ) for the development of an Android app (3D Camera [android.com] ) that converts pictures from 2D to 3D, without glasses (check it out!). The optical flow is a more general version of depth estimation (i.e. motion in any direction, not just left to right). It has been applied to 3D conversion of videos with relative success; I can search for references if you are interested.

From my knowledge & experience, optical flows are the state of the art algorithms to convert 2D pictures/videos to 3D, but they are quite computationally intensive.
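
To get a feel for what the simplest member of that disparity-estimation family looks like, here is a toy sum-of-absolute-differences block matcher (a sketch only; real benchmark entries add smoothness terms, sub-pixel refinement, and occlusion handling):

```python
import numpy as np

def block_match_disparity(left, right, block=5, max_disp=16):
    """Toy SAD block matcher: for each pixel in the left view, find the
    horizontal offset d whose patch in the right view matches best.
    left/right: 2-D grayscale arrays from a rectified stereo pair."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```

Even on clean synthetic input this is O(h·w·block²·max_disp), which is why the parent calls the state of the art computationally intensive.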

New Methods (1)

bmacs27 (1314285) | more than 2 years ago | (#38807319)

http://www.pnas.org/content/108/40/16849.short [pnas.org] It turns out there is information in a single static image about the magnitude and sign of defocus (if you know something about the camera). That information is carried largely by the shape of the power spectrum, and is the reason we are able to look at a single static 2d image and say "it's out of focus." Once you know the magnitude, sign of defocus and the optics of the camera, you have depth. Obviously there is a 2)... before 3) profit... but I think this is a promising approach.

Re:2D to 3D Algorithms (0)

Anonymous Coward | more than 2 years ago | (#38807481)

This is mostly correct.

Converting 2D video to 3D is performed via various techniques, primarily "structure from motion", but also "shape from shading".

In structure from motion, features in the images are tracked from video frame to video frame. Optical flow methods are one approach to this, though typically SLAM (simultaneous localization and mapping) systems will search for robust image features that can be reliably matched between views. The geometry of a moving camera is a well studied subject, and there is a solid theory behind how the motion of image points between frames can be converted into a model of the structure of the scene. Note that for this you need a moving camera - a static camera will not suffice.

In shape from shading, knowledge of the location of the light source is used, along with observation of the luminance of objects in the image and any shadows, to infer the structure of the scene. To my knowledge, this is much less used - certainly in real-world environments.

From single images, there are various publications which infer structure and 3D shape from lines in the images. These use assumptions about which regions of the image might represent planar surfaces and which lines might be parallel to infer the perspective of the scene, and from this, extract a sense of the 3D shape.

Algorithm? Um, no. (1)

Anonymous Coward | more than 2 years ago | (#38807185)

Despite what some PR hustling excitables might claim, stereoscopic conversion cannot be effectively automated at this time. Do people try it? Yes. Does it generate watchable results? Sometimes by accident, yes.

The thing is, a stereoscopic conversion done painstakingly frame by frame by a highly skilled compositing artist looks pretty bad. Any automated conversion process will be orders of magnitude worse.

What you need is a ton of really excellent rotoscoping (I send my jobs out to work farms in Russia) to separate all of the elements, and then a compositing application like After Effects or Nuke to offset the various layers along the Z (while scaling to retain size coherency). Now the fun part: fill in all the missing pixels your offset has made visible! A combination of displacement maps, cloning and hand-painted details should do the trick (this is the part that separates the men from the boys).

Your mileage may vary, but in ideal circumstances this is still a pretty hard trick to pull off without inducing headaches or making everything on screen look like cardboard flats.

1D to 2D (1)

Anonymous Coward | more than 2 years ago | (#38807287)

I'm waiting for the 1D to 2D algorithms to be perfected. I have this 1D sketch of the battle of Bull Run that I'd really like to get converted. Here's the 1D version: __________________________ ________________ ___ ____________ Hopefully Slashdot doesn't get a takedown notice. What will be really awesome is when all of these work together, so I can convert that 1D drawing into a 4D movie!

Hey, i work doing this kind of things (0)

Anonymous Coward | more than 2 years ago | (#38807503)

Here are the steps:

With a normal 2D camera you need to:
1. Look up camera calibration (intrinsic and extrinsic).
2. Get some images.
3. Calibrate against objects whose size and plane you know.
With the calibration you can estimate 3D.
Get OpenCV (it's the best free thing around); it has some of this built in.

With Kinect:
Get nestk and play with it.

Advice: it's a pain. 2D is a reduced dimension, so you cannot recover actual 3D, only estimate it, and you will have to make assumptions.

I won't give more details because I hate writing from my phone.

Hope it helps

Leo

Using color channel mixing for fun and profit (0)

Anonymous Coward | more than 2 years ago | (#38807729)

The above comments assume that the IMAGE the OP wants to convert is 2d. What if he means that he's got a 2d DISPLAY, but a 3d image he wants on it?

That is possible. Sure he can't upgrade a single picture to a 3-d scene. Anyone with a brain can see that, and they wouldn't be posting here. So, assume he's got the data he needs already, but nothing to display it on. GO!

My recommendation is Adobe's gamma adjustment tool. It's packaged with all their products, and usually it's meant to color-correct a monitor to give an accurate picture of what an artist's work will look like across the various media they use (paper, glossy, screen, etc.). This is useful to you, the 3D guy, because the anaglyph glasses work through the use of colors.

The red side of the glasses filters out all green. The cyan side filters out all red. Either eye will get plenty of blue, so the first thing is to make sure you compensate for that by turning off the blue channel completely. That will ensure that only the colors your glasses can filter are going to be displayed.

The next thing is to take both video streams (right and left) and convert them both to monochrome. There are a number of ways to do that, so do your research. Then copy the red channel from one stream and use it to replace the red channel on the other. Delete the blue channel entirely.

Now you have a 3D anaglyph video playing on a 2d screen. PRESTO!

You cant. (1)

Dynedain (141758) | more than 2 years ago | (#38807931)

You can't just run a 2D video through an algorithm and magically get a 3D video.

You have to run the video through a compositing program (think Photoshop for video) and use that to chop and mask each scene and introduce parallax effects. Then (if your compositing program supports 3D space) you output the streams from two different virtual cameras so that you have 2 final videos that are synced and are from two different angles (one for each eye). At that point, it's trivial to encode them to whichever 3D video container format you want to deliver as your final output.

If you really want to learn how to do this, try it first using stills with Photoshop or the Gimp. Once you understand what's involved for creating a believable 3D scene out of a 2D image, you're ready to start learning how to use a video compositing app to do the same thing.

Be prepared to spend a lot of time on this.

Re:You cant. (1)

Dynedain (141758) | more than 2 years ago | (#38807975)

Also, if you really think you're up to the task of writing an algorithm, the place to start is reading up on all of the various SIGGRAPH research papers on image composition analysis and video processing.

realtime conversion? no such thing (1)

Speare (84249) | more than 2 years ago | (#38808047)

high quality 2D-to-3D realtime conversion

This is basically impossible, or will have horrible artifacts.

The current crop of movies with 2D-to-3D conversions still took significant human and artistic effort to achieve, even though the results are mediocre. For a given frame, for every pixel in 2D, SOMETHING has to decide how far away the subject depicted must be. That is, it has to INVENT the third dimensional value. Then this value is used to calculate two new 2D frames with parallax involved.

There's no computational way to achieve this INVENTION of the depth value with an arbitrary photograph, though. Any computational model will have big gaps in its ability. With enough computing power, you can perhaps identify visual markers in neighboring frames (say, the corner of a lampshade), solve for where the camera position must be relative to the markers, then use the depth of the solved markers as a basis for all the other pixels (say, the lampshade versus the drapes). But that (1) takes significant solver time now, (2) requires a lot of hand-adjustments to discard inappropriate markers that upset the solver process with bad results, and (3) won't find anywhere near enough quality markers across the whole frame in fast-moving action scenes to fill in the rest of the data.

Some people get ill with the best 3D out there, others get ill as the quality of the 3D information degrades. The inconsistent results of any realtime method would likely be epilepsy- and nausea-inducing in a matter of seconds.

Separate L/R 3D Master (0)

Anonymous Coward | more than 2 years ago | (#38808257)

If you're creating a 3D master, it's better to render out to separate left and right streams, then use post-production to convert it to whatever presentation format you need: anaglyph, L/R side-by-side or over/under, field-sequential, etc. If you master straight to anaglyph, you're stuck with anaglyph. As for the actual 3D conversion itself, welcome to a whole world of rotoscope hurt (not impossible, but close), unless you start from the beginning with a stereo camera rig (difficult) or do everything in a 3D rendering app and set up a standard stereo camera rig there (easy).

No one really addressing the question (0)

Anonymous Coward | more than 2 years ago | (#38808617)

Yes, 3D is a bad idea, but that's not the question. No, there is no affordable 3D conversion software. If you are after converting home movies to 3D, there are consumer-grade cameras available. Realize that most systems involve halving frames, so if the camera boasts 1080p, read the fine print, because the effective resolution is half that.

As far as converting old movies, there's nothing out there. I'm fairly sure I could produce a 3D shot with After Effects given time, but to do a whole movie that way would be insanity. As far as a passive solution, it's impossible if the camera is locked off. You'd be asking the software to make creative judgements. Even the old colorizing software required an operator to make judgement calls about how to color different parts. If they spent proper time on it the results were impressive, but more often than not they rushed it through and the results were appalling.

Until someone comes up with a 48-frame or 60-frame system, 3D will continue to be obnoxious. The worst of the lot are action films that try to use narrow shutters to reduce motion blur. In a normal film it causes strobing; in a 3D film it's agonizing to watch. The action scenes strobed so much in the new Underworld movie that half the time I couldn't tell what was going on. Why did I go to the 3D version? The local theater was only playing the standard version two times a day. Often there is only the 3D version these days, which artificially creates demand.

They'll be lucky if 3D lasts another 5 years before people stop going just because a film is in 3D. In 10 years few if any films will be made in 3D. The only thing keeping it alive now is the fantasy that the studios can stump the pirates who are filming screens. Personally, I'd prefer walking through a full body scanner to being forced to watch most 3D movies.

Misleading... (0)

Anonymous Coward | more than 2 years ago | (#38808931)

I have slashdot as an rss feed and when i saw 'convert 2D to 3D', my otaku mind just....

*sigh*

3DTV.com (0)

John Sokol (109591) | more than 2 years ago | (#38808941)

A friend of mine (former CEO of a startup I founded) asked me to write one.

He called and kept offering more each time. I actually spent some time investigating this and decided that it was a good way to give myself a stroke.

It's hard enough implementing and getting things right when you know what to do; with 2D to 3D there isn't even a clear algorithmic method to use, few papers, and no examples of a good automated conversion. DDD seems about the best.

I must admit I've seen a decent human-with-software-assist job that was surprisingly good, but even that isn't nearly as good as a 3D camera or rendering CGI directly into 3D.

John L. Sokol
videotechnology.com

Short Guide (0)

Anonymous Coward | more than 2 years ago | (#38809033)

Found some info here:

http://www.3dcombine.com/conversion.html

2D to 3D conversion is the process whereby existing 2D content is converted to 3D. There are a number of different methods that can be employed, though none will produce the same effect as recording in 3D in the first place. There are a number of reasons for this. A key issue is that some information is missing. Try looking at an object in the distance that is partially obscured by one nearby. Close each eye in turn. You'll see that more of the background is visible in one eye than the other. If you only had the view from the more obscured eye, the extra background is missing and would have to be extrapolated (invented) when creating the missing eye view...

DVDFab? (1)

dnwheeler (443747) | more than 2 years ago | (#38809087)

You can try DVDFab from Fentao and see if that works for you: http://www.dvdfab.com/

The only viable algorithm is called "interns" (1)

Rui del-Negro (531098) | more than 2 years ago | (#38809297)

I work in post-production, and while some of the stereo-handling algorithms are impressive from a technical point of view (like the stuff in Eyeon Dimension and The Foundry's Ocula), and while I think stereo 3D is here to stay for video games (at least after consoles add some improvements to head tracking), I doubt it will be more than a passing fad for movies. It's simply not compatible enough with human vision, even when done properly (head movements spoil the effect, the difference between convergence point and focus plane puts stress on your eyes, etc.; it's as if someone nailed your head to the cameras). When I'm watching a movie, I'm a spectator; I don't feel any need to be "in" the movie; I'm fine with being an infinite distance away. Anything that makes watching the movie less comfortable is going to detract from the experience.

Anyway, although there are ways to extract 3D information from 2D image sequences (not from individual images), as done by camera trackers such as SynthEyes, PFTrack, etc., the result is a very low resolution point cloud, which is really only useful to calculate the camera position and / or track some scene features, not to create a usable stereoscopic image pair.

The only vaguely acceptable way to get stereo is to project the frames onto a (simplified) hand-made 3D model of the shot (typically a grid deformed by a displacement map), and then render it from two virtual cameras. This can take ages (to set up; rendering is quick) and is generally the kind of work you offload to some intern you don't like much. Even then, the results are generally less pleasant to watch than the original (mono) footage. If you're interested in seeing how this is done, search for "Stereo Conversion NAB" on YouTube, and you should find a few examples.

There is no way to convert individual frames from 2D to 3D in real time for the same reason that "digital zoom" can't show you text that was smaller than the sensor's pixels; the information is simply not there. You can, obviously, write an algorithm that adds made-up depth information to any image, just as you can write an algorithm that adds random text to zoomed images, but I doubt that would improve your movies in any way.

Machine learning (0)

Anonymous Coward | more than 2 years ago | (#38809725)

Right now, making 3D out of 2D is mostly manual work, but it need not be so in the future. Your brain can effortlessly extrapolate hidden sides of objects on a photo or film and reconstruct the depth field (it can also be confused easily by the "Devil's fork" or other such specially crafted pictures). There are lots of cues for this: lighting (shadows, glare), perspective features, sizes of known things etc.

In the future, this will be doable on an artificial brain. You can already do this with single photos, using an artificial neural network. Just submit your photo to http://make3d.cs.cornell.edu/ and in a few seconds you'll get back a textured 3D model of the scene or a fly-through video, whichever you prefer. It is awesome; I love AI (and welcome the overlords, of course)!

Maybe possible... (1)

marsgazer (2560133) | more than 2 years ago | (#38809761)

The problem isn't too hard if you are moving your camera sideways at an even speed, since you could just use one frame for the left view and a frame a short amount of time later for the right view. However, if the video camera is taking some unknown path, then no two frames from the original video will in general create the correct parallax. Therefore, you would need to do a bundle adjustment on the camera movement (computationally quite painful and not always reliable for arbitrary camera motion). Then comes the hard part of producing a close-to-100%-coverage dense 3D model at regular enough intervals to render new image frames with camera spacing and orientation to match human vision. Not impossible, but I think reliability of the currently available algorithms and computation time are the big problems.

For the best results... (1)

Stele (9443) | more than 2 years ago | (#38811051)

I write post-production software used to do this (and it runs on Linux!). The best results I've seen involve manually breaking each shot into dozens of layers, using rotoscoping. Each set of layers is exported as masks and imported into a compositing application where the images for the layers are projected onto the masks in 3D space. In some cases they build rough 3D models and project the layers onto the respective models. Now they can add a virtual camera and render the scene from both views. Then they bring the footage into a paint system and manually paint in the "missing" parts that now show up because of the change in camera angle. This has to be done for both the left and the right eye.

They have a room of 300 guys in India doing this for Titanic. But the results are INCREDIBLE.

Some semi-automatic techniques involve rotoscoping a depth map by hand (or with some automated depth-map generation, though this almost always has to be tweaked for good results), then using that to synthesize two new views from the 2D footage. Then to fill in the gaps they can use either an automated warping (which looks almost, but not quite, entirely not all right) or hand-painting again.

The upshot is it is a very very manual, labor-intensive process, with somewhat specialized tools. But when done well it looks amazing.
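
The depth-map route described above can be sketched as depth-image-based rendering (a toy sketch on grayscale arrays; the crude left-neighbor hole fill is exactly the kind of automated warp that rarely looks right):

```python
import numpy as np

def synthesize_view(image, depth, max_shift=8):
    """Crude depth-image-based rendering: shift each pixel horizontally
    in proportion to its depth value (0 = far, 1 = near) to fake the
    second eye's viewpoint. Grayscale for brevity. Holes exposed by the
    shift are filled from the nearest surviving pixel to the left."""
    h, w = depth.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    shifts = np.round(depth * max_shift).astype(int)
    for y in range(h):
        for x in range(w):  # no z-buffer: later writes win collisions
            nx = x - shifts[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
        for x in range(1, w):  # naive hole fill
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```

Pair the original frame with the synthesized one and you have the left/right inputs for any of the anaglyph merges discussed earlier in the thread; the quality of the result is dominated entirely by the quality of the depth map.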
