
New Technique Creates 3D Images Through a Single Lens

Soulskill posted about a year ago | from the also-known-as-a-pirate's-eye-view dept.

Technology

Zothecula writes "A team at the Harvard School of Engineering and Applied Sciences (SEAS) has come up with a promising new way to create 3D images from a stationary camera or microscope with a single lens. Rather than expensive hardware, the technique uses a mathematical model to generate images with depth and could find use in a wide range of applications, from creating more compelling microscopy imaging to a more immersive experience in movie theaters."


56 comments

What they actually did (5, Informative)

Anonymous Coward | about a year ago | (#44502433)

"Harvard researchers have found a way to create 3D images by juxtaposing two images taken from the same angle but with different focus depths"

Re:What they actually did (5, Informative)

harvestsun (2948641) | about a year ago | (#44502755)

Except what they're actually doing has nothing to do with juxtaposition. They're inferring the angle of the light at each pixel, and then using that angle to dynamically construct new perspectives. The person who wrote the article on Gizmodo just didn't know what he was talking about.
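A minimal sketch of that idea (not the authors' code; the FFT-based Poisson solve, the averaging, and every constant are illustrative assumptions): take two frames at slightly different focus settings, estimate the mean ray angle (the light-field "moment") at every pixel from how the intensity changes with focus, then re-sample the pixels along those angles to fake a small change of viewpoint.

    import numpy as np

    def moments_from_focus_pair(img_a, img_b, dz):
        """Estimate the per-pixel mean ray angle ("moment") of the light field
        from two grayscale frames of the same scene, focus planes dz apart."""
        a = img_a.astype(float)
        b = img_b.astype(float)
        dI_dz = (b - a) / dz                               # axial intensity derivative
        h, w = dI_dz.shape
        ky = 2 * np.pi * np.fft.fftfreq(h).reshape(-1, 1)
        kx = 2 * np.pi * np.fft.fftfreq(w).reshape(1, -1)
        k2 = kx ** 2 + ky ** 2
        k2[0, 0] = 1.0                                     # avoid division by zero at DC
        # Solve laplacian(U) = -dI/dz in Fourier space; the moment field is grad(U) / I.
        U = np.real(np.fft.ifft2(np.fft.fft2(dI_dz) / k2))
        I = 0.5 * (a + b) + 1e-6
        gy, gx = np.gradient(U)
        return gx / I, gy / I

    def shifted_view(img, mx, my, amount):
        """Re-sample the image as if the viewpoint moved sideways by `amount`."""
        h, w = img.shape
        ys, xs = np.mgrid[0:h, 0:w]
        xs2 = np.clip(np.round(xs + amount * mx), 0, w - 1).astype(int)
        ys2 = np.clip(np.round(ys + amount * my), 0, h - 1).astype(int)
        return img[ys2, xs2]

Sweeping `amount` back and forth gives the parallax wobble shown in the demo video; the recoverable angles are tiny, which is presumably why the views only rock by a degree or so.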

not remotely new (0)

Anonymous Coward | about a year ago | (#44505111)

The technique is not even new. I know this because I wrote code to do exactly this back in 1994. And I didn't even invent the basic idea of what I was doing.

Re:not remotely new (1)

wonkey_monkey (2592601) | about a year ago | (#44506303)

That's nothing. I invented the motorway in 1976.

Re:What they actually did (0)

Anonymous Coward | about a year ago | (#44505203)

I suppose that's to be expected from Gizmodo. They are too biased and lazy to do any real journalism; instead they blog about things that are way out of their league. It's like expecting anything positive from a Microsoft product review on their site: it just isn't going to happen.

Point is... (1)

djupedal (584558) | about a year ago | (#44502465)

We don't care too much how the wonks do it as long as it doesn't involve headgear beyond what moviegoers walk into the theater with.

Re:Point is... (1)

viperidaenz (2515578) | about a year ago | (#44502525)

Point is... this is about taking a 3D image with a single everyday consumer-level DSLR camera from a single viewpoint, not projecting a 3D image for people to look at.

Re:Point is... (2)

hedwards (940851) | about a year ago | (#44502625)

You might not care about that, but I personally care more about the crap they show on screen. As long as what they're showing was shot in 2D and turned into 3D using computer manipulation, I'm not interested.

From the sound of this, they can use one lens to create an image that's effectively 3D and do so for the entire scene, rather than portions. That I'd consider seeing.

When they get that down, then worrying about the eye wear will make some sense. At this point the 3D just isn't good enough in most cases to waste money viewing it.

Re:Point is... (2)

wagnerrp (1305589) | about a year ago | (#44503045)

If you need special eye wear or need to stand in a certain position, it's not 3D, merely stereo.

Re:Point is... (2, Insightful)

hedwards (940851) | about a year ago | (#44503191)

No, that's 3D. 3D is when the eyes see slightly differing images and interpret that as a scene with depth.

The definition you're using is highly non-standard and completely misses the point. A movie will always require that you be sitting in the right place. Just as one doesn't typically watch a Broadway play from backstage. Or aren't those plays 3D?

Re:Point is... (0)

Anonymous Coward | about a year ago | (#44504877)

No, that's 3D. 3D is when the eyes see slightly differing images and interpret that as a scene with depth.

Sorry, but you are mistaken. A stereo pair is a couple of 2D stereo images, which means that it is not in fact a 3D scene. Your brain is tricked into perceiving it as a 3D scene because it relies (mostly) on stereoscopic vision for depth perception.

The definition you're using is highly non-standard and completely misses the point.

No, the movie industry's definition of "3D" is technically wrong. They should be calling it stereoscopic or something. A hologram is considered 3D, because it captures a continuous interference pattern representing any number of viewing angles on a 3D object.

Re:Point is... (1)

Trogre (513942) | about a year ago | (#44505881)

A hologram is considered 3D, because it captures a continuous interference pattern representing any number of viewing angles on a 3D object.

... but not the back of it, so it's not "true" 3D either.

Re:Point is... (1)

wagnerrp (1305589) | about a year ago | (#44507453)

Actually, a hologram captures and reproduces a light field, encoded in that interference pattern, so it reproduces the same exact light pattern emitted by a 3D object. It is "true" 3D, as true as you can get without a volumetric reproduction in free space.

Re:Point is... (0)

Anonymous Coward | about a year ago | (#44511983)

Actually it CAN show the back. I took an optics class in college, and there were holograms to play with. One hologram was of a pair of dice. The film was wrapped around a beaker with the dice inside, exposed with the laser's split beams, and developed. Wrap the developed film around a beaker and you could indeed walk around the beaker and see all sides of the dice.

Stereo isn't true 3D because it doesn't convey all the information. Stereoscopy isn't the only method the brain uses to determine depth: close one eye and look at the desk in front of you and you still see distance. The eye's focus comes into play as well, which is why "3D" movies give some people headaches: the brain stereoscopically sees an object four feet away while the eyes are focusing ten feet away, and that mismatch causes the eyestrain behind the headaches.

You don't get that with holograms. Holograms are indeed true 3D.

Re:Point is... (1)

black3d (1648913) | about a year ago | (#44503469)

Right and wrong. If you're narrowing the definition to the source, you're correct; however, if your brain interprets what it sees in three dimensions, then you're seeing 3D. Or simply: projected image - not 3D; visualised image - 3D.

Re:Point is... (1)

wagnerrp (1305589) | about a year ago | (#44507471)

Your brain is confused, as it is getting some cues telling it it is viewing a 3D volume, and other cues telling it it is viewing a 2D plane, so you get headaches.

Re:Point is... (1)

jovius (974690) | about a year ago | (#44503087)

True. At least the movie studios have the material for reprocessing, but it lacks the information about what's behind the objects. It's pretty funny that something shot as 3D is turned into a 2D projection for the screen. Everything is still flat. Curved screens are better, but still it's basically 2D...

Not Impressed (0)

Anonymous Coward | about a year ago | (#44502477)

The result does not impress me

Re:Not Impressed (1)

oodaloop (1229816) | about a year ago | (#44502521)

Well, you guys at Harvard can just quit wasting your time! It didn't impress Anonymous Coward. Back to the drawing board, I guess.

Re:Not Impressed (0)

Anonymous Coward | about a year ago | (#44502769)

Well, you guys at Harvard can just quit wasting your time! It didn't impress Anonymous Coward. Back to the drawing board, I guess.

Damn right. Obviously you didn't read the article. The '3d' here sucks my ballsack.

Re:Not Impressed (2)

Arkh89 (2870391) | about a year ago | (#44503671)

The result does not impress me either.
You can do TRUE 3D through a single lens because you can get depth from the interference of the input light field over the finite aperture (\approx lens) size. That requires only A SINGLE IMAGE.
One example of this approach is the Double Helix Point Spread Function (DHPSF) developed at the University of Colorado by Prof. Piestun using a specific phase mask:
http://3.bp.blogspot.com/-oX6BL98Bi8Y/TvstLYSp5jI/AAAAAAAABLI/fDKeFKvKWs0/s400/MS+Double-Helix+PSF.JPG [blogspot.com]
If you have the angle, you have a measure of the depth. This estimation requires only one image. And this is more than ten years old.

Re:Not Impressed (1)

HuguesT (84078) | about a year ago | (#44505927)

I'm not sure how you can achieve the DHPSF with standard optical equipment. How do you measure the phase mask from a single image? Do you have a reference?

Re:Not Impressed (2)

HuguesT (84078) | about a year ago | (#44506005)

I've done some quick research into what you suggest:

http://www.stanford.edu/group/moerner/sms_3Dsmacm.html

Basically you need a lot more than a single 2D image. You need a stack, and from the stack you can measure the angle you suggest. Your very own link illustrates this. What this technique allows you to do is measure the depth of point-like objects to a very good resolution, better than can usually be done with confocal imaging, but this is not easily applicable to other modalities.

I think you might be confused because in confocal microscopy, a "single image" is a 3D stack of optical sections.

Also you need to insert a phase mask into the focal plane of the optical equipment. This is not easily done.

Basically the authors' technique referenced here is applicable to a different set of modalities, so it is useful in a different way.

Cheers.

Re:Not Impressed (1)

Arkh89 (2870391) | about a year ago | (#44567371)

(Sorry for the late answer)
No, you don't need a stack. The illustration of the stack is just telling you that the PSF rotates with depth, by showing you multiple transverse slices of the EM field during propagation. But imaging particles (= point sources) at different distances from the aperture will produce Dirac delta functions convolved with different PSFs. The PSF encodes their distance (which is related to the curvature of the input wavefront).
As for the phase mask, it must be inserted at the aperture (either the entrance or exit pupil of the system) such that its plane and the image plane are Fourier conjugates. This is usually feasible and easy.
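To make that concrete, here is a toy sketch of the depth estimate (the lobe detection and the calibration constants are invented for illustration; real implementations fit the lobes much more carefully against a measured calibration curve): a point source imaged through the DHPSF phase mask shows up as two lobes, and the orientation of the line joining them encodes depth.

    import numpy as np
    from scipy import ndimage

    def dhpsf_depth(patch, z_per_degree=25.0, angle_at_focus=0.0):
        """Toy depth estimate for one emitter from a single DHPSF image patch.
        z_per_degree and angle_at_focus stand in for a measured calibration."""
        mask = patch > 0.5 * patch.max()                   # crude lobe segmentation
        labels, n = ndimage.label(mask)
        if n < 2:
            raise ValueError("expected two PSF lobes")
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        two = list(np.argsort(sizes)[-2:] + 1)             # labels of the two biggest lobes
        (y1, x1), (y2, x2) = ndimage.center_of_mass(patch, labels, two)
        theta = np.degrees(np.arctan2(y2 - y1, x2 - x1))   # lobe orientation
        return (theta - angle_at_focus) * z_per_degree     # angle -> depth via calibration

As the parent says, one frame is enough, because the depth is encoded in the shape of the PSF rather than in parallax between frames.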

Re:Not Impressed (1)

HuguesT (84078) | about a year ago | (#44640179)

Sorry for the late answer too.

How then can you have, in a single 2D frame, two (or more) point sources, both in focus, with significantly different depths from the aperture? In a system with a large numerical aperture, as in confocal microscopy, I think you cannot, since by definition you have very shallow depth of field. Hence the angle only allows one to compute the depth with more precision. It seems this idea only works for point sources as well.

So overall, what you suggest is a different idea applicable to a different field. There are many other variants on the idea of using a mask at the aperture, leading to the field of computational photography, and particularly the subfield of coded aperture [wikipedia.org], which is very much an active area of interest in image processing right now.

Note that there doesn't exist a nice solution that works for every situation, as usual.

Re: Not Impressed (0)

Anonymous Coward | about a year ago | (#44504501)

But it impressed oodaloop. What a faggoty annonymous Internet handle.

Seen it before (0)

Anonymous Coward | about a year ago | (#44502481)

Re:Seen it before (1)

viperidaenz (2515578) | about a year ago | (#44502537)

You're confused.
This is taking 3D pictures with a non-3D camera. Not viewing 3D images.

Re:Seen it before (1)

oodaloop (1229816) | about a year ago | (#44502693)

If you had that 3D spectacle, you might have seen it whoosh over your head in amazing 3D.

wrong title. (1)

Anonymous Coward | about a year ago | (#44502583)

"students use focus stacking to make wobble gifs" would have really captured the meat of the article in a single sentence.

Re:wrong title. (1)

hutsell (1228828) | about a year ago | (#44503971)

After watching the demonstration in the YouTube video shown in the article, it definitely looks that way: the wobble gif technique was repurposed and then declared a promising new way to make 3D images. However, what would happen if a way could be devised to take (an arbitrary value of) 100 depths per frame and run it at 24 fps? Is that the direction they're trying to go?

Although I didn't get the feeling from the article that it was in their plans, most likely someone somewhere has considered the idea, or a variation of it, and determined whether the extra effort would achieve something more than a cheesy gimmick.

This was on Gizmag 12 hours ago (0)

Anonymous Coward | about a year ago | (#44502627)

as were several other articles in the past few days on Slashdot...

You must be new here (0)

Anonymous Coward | about a year ago | (#44502829)

There is nothing new under the sun or on Slashdot.
I'm impressed it's ONLY 12 hours old.

Kaleidocamera can do this as well (1)

gringer (252588) | about a year ago | (#44502809)

Saarland University developed a reconfigurable camera add-on, the kaleidocam [mpi-inf.mpg.de] which can do 3D [youtube.com] as well as many other things. It allows you to take a single picture that is split by the device into multiple images that appear on the sensor as an array of smaller images. Possible functions include:

  • Multi-spectral imaging (including simulation of different white points and source lighting)
  • Light field imaging (3D, focal length change, depth of field change)
  • Polarised imaging (e.g. glass stress, pictures of smoke in natural light)

Of course, this requires a single shot using a fancy lens, whereas the Harvard technique needs two frames but "no unusual hardware or fancy lenses".

Depth Field Camera? (0)

Anonymous Coward | about a year ago | (#44503303)

Isn't this what Lytro [lytro.com] has been doing for a while now?

Re:Depth Field Camera? (1)

Rui del-Negro (531098) | about a year ago | (#44503589)

No. Lytro's software allows refocusing in post (at a huge cost in terms of resolution). It does not try to extract any parallax information from the image.

Re:Depth Field Camera? (1)

paskie (539112) | about a year ago | (#44503795)

Lytro's basic building block is a microlens array, a rather expensive piece of hardware that also limits effective resolution dramatically. That is why the title here touts a "single lens".

The microlens array captures the light field, which is what Lytro uses for computational refocusing. However, capturing the light field of a translucent microscopic sample does let you adjust the viewing angle after the fact (to some degree), and therefore do 3D imaging at microscopic scales.
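A rough illustration of why the microlens array buys you perspective shift (and costs so much resolution): the raw plenoptic frame is a grid of tiny lenslet images, and keeping the same pixel under every lenslet reassembles a view through one particular part of the main lens. This assumes an idealized, perfectly aligned square lenslet grid, nothing like Lytro's real decoding pipeline.

    import numpy as np

    def subaperture_view(raw, lenslet_px, u, v):
        """Extract one fixed-direction view from an idealized plenoptic raw frame.
        raw: 2D sensor image tiled by square lenslet images of size lenslet_px.
        u, v: which pixel under each lenslet to keep (0 .. lenslet_px - 1)."""
        rows = raw.shape[0] // lenslet_px
        cols = raw.shape[1] // lenslet_px
        grid = raw[:rows * lenslet_px, :cols * lenslet_px]
        grid = grid.reshape(rows, lenslet_px, cols, lenslet_px)
        return grid[:, v, :, u]        # one sample per lenslet -> a rows x cols view

Sweeping u or v across the lenslet gives the left/right/up/down viewpoint wiggle; each view is only rows x cols pixels, which is where the drastic loss of effective resolution comes from.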

Re:Depth Field Camera? (2)

dfghjk (711126) | about a year ago | (#44504213)

The irony of Lytro is that people fail to realize that depth of field is inherently a function of resolving power. When Lytro destroys resolving power to create alterable depth of field in post, all they are really doing is creating a means of artificially limiting depth of field, not a means of enhancing it. With sufficiently good techniques for simulating out-of-focus areas and image reduction, the same ability could be offered without the immense penalties and with conventional optics (except no one would want it, just like Lytro). Lytro is truly the emperor's new clothes.

Lytro is a colossal waste of VC funding.

Re:Depth Field Camera? (1)

laird (2705) | about a year ago | (#44527953)

You don't understand what Lytro did. It's not about depth of field, it's about capturing the light rays (i.e. in different directions) rather than one set of pixels. One of the things you can do with that is alter depth of field, but you can also alter focus, so you're focusing nearer or further, or shifting perspective to look behind things. But since the Lytro is capturing light across a large sensor, not from two points, you can shift up/down/left/right and by variable amounts, not just flip between left and right eye view. So you can argue that you'd rather have the resolution without the refocusing, depth of field, perspective shift, etc., made possible by having the lightfield. But you should at least understand what you're saying you don't want.

Re:Depth Field Camera? (1)

Rui del-Negro (531098) | about a year ago | (#44517069)

Oh, I knew they could extract (very limited) parallax information from the plenoptic image data, I just didn't know they had coded that into their software (they didn't have it the last time I checked, they were only doing refocusing).

Re:Depth Field Camera? (1)

dfghjk (711126) | about a year ago | (#44504139)

Not true. http://www.wired.com/gadgetlab/2012/11/lytro-3d-feature/ [wired.com]

Lytro is desperate to find an application where their technology is relevant. Now they are claiming perspective shift as a feature they've "launched".

The world is really excited about a 1 MP camera these days, especially one that can wiggle the perspective a few mm in each direction or reduce the depth of field, just not both at the same time. ;)

Re:Depth Field Camera? (1)

Rui del-Negro (531098) | about a year ago | (#44517033)

I stand corrected. Last time I'd checked out their software all it could do was refocus. Once they finally support simultaneous refocusing and wiggling (which is technically possible, by limiting the amount of each)... their cameras will still be just as useless.

Not exactly new, and pretty limited (3, Informative)

Rui del-Negro (531098) | about a year ago | (#44503631)

Having two lenses is not a requirement to capture stereoscopic images. It can be done with a single (big) lens, and two slightly different sensor locations. But you're limited by the distance between those two sensors, and a single large lens isn't necessarily cheaper or easier to use than two smaller ones.

What this system does is use the out-of-focus areas as a sort of "displaced" sensor - like moving the sensor within a small circle, still inside the projection cone of the lens - and therefore simulating two (or more) images captured at the edges of the lens.

But, unless the lens is wider than the distance between two eyes, you can't really use this to create realistic stereoscopic images at a macroscopic scale. The information is simply not there. Even if you can extract accurate depth information, that is not quite the same as 3D. A Z-buffer is not a 3D scene; it's not sufficient for functional stereoscopy.

Microscopy is a different matter. In fact, there are already several stereoscopic microscopes and endoscopes that use a single lens to capture two images (with offset sensors). Since the subject is very small, the parallax difference between the two images can be narrower than the width of the lens and still produce a good 3D effect. Scaling that up to macroscopic photography would require lenses wider than a human head.
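The "a Z-buffer is not a 3D scene" point is easy to see in code: forward-warp one image plus its depth map to a second eye position and you get holes wherever the foreground used to hide the background, because that information was never captured. A toy sketch with made-up baseline and focal-length numbers:

    import numpy as np

    def render_other_eye(img, depth, baseline=0.06, focal_px=800.0):
        """Forward-warp a grayscale image to a horizontally shifted eye position
        using its depth map. Returns (view, holes); holes marks disoccluded
        pixels that would have to be in-painted or guessed."""
        h, w = img.shape
        disparity = focal_px * baseline / np.maximum(depth, 1e-3)   # shift in pixels
        out = np.zeros_like(img)
        zbuf = np.full((h, w), np.inf)
        filled = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                xt = int(round(x - disparity[y, x]))
                if 0 <= xt < w and depth[y, x] < zbuf[y, xt]:
                    zbuf[y, xt] = depth[y, x]        # nearer surface wins
                    out[y, xt] = img[y, x]
                    filled[y, xt] = True
        return out, ~filled

For small baselines the holes are small and in-painting hides them; for anything close to the camera, they are not.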

Re:Not exactly new, and pretty limited (0)

Anonymous Coward | about a year ago | (#44504889)

>The information is simply not there

If you have a relatively small focal depth, it is. Google "depth from defocus [google.com] ".
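For reference, a toy version of depth from defocus: with two frames focused at different distances, the relative sharpness of each patch in one frame versus the other says roughly where that patch sits between the two focal planes. This is only a crude contrast-ratio proxy, not any published algorithm.

    import numpy as np
    from scipy import ndimage

    def defocus_depth_map(near_focus, far_focus, patch=9):
        """Crude relative-depth map in [0, 1] from two differently focused frames:
        0 ~ near the near_focus focal plane, 1 ~ near the far_focus one."""
        def local_contrast(img):
            lap = ndimage.laplace(img.astype(float))              # high-pass response
            return ndimage.uniform_filter(lap ** 2, size=patch)   # local energy
        c_near = local_contrast(near_focus)
        c_far = local_contrast(far_focus)
        return c_far / (c_near + c_far + 1e-9)    # fraction of sharpness in the far frame

It only works where there is texture, and it yields a depth map rather than genuine parallax information.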

Re:Not exactly new, and pretty limited (1)

Rui del-Negro (531098) | about a year ago | (#44516987)

No, it isn't. The only information you can get is what is in the light hitting the lens. That's effectively limited to parallax information between the edges of the lens (in reality, less than that, but let's pretend). In other words, as I wrote above, "unless the lens is wider than the distance between two eyes, you can't really use this to create realistic stereoscopic images at a macroscopic scale".

Re:Not exactly new, and pretty limited (0)

Anonymous Coward | about a year ago | (#44505059)

But maybe they could apply this technique to existing 3D cameras, then we'd have like.... 4 times the 3Ds!

Re:Not exactly new, and pretty limited (0)

Anonymous Coward | about a year ago | (#44505071)

Even if you can extract accurate depth information, that is not quite the same as 3D. A Z-buffer is not a 3D scene; it's not sufficient for functional stereoscopy

WWWRONG. A z-buffer IS a 3D scene. You are confusing a 3D image with a 3D image that lets you change your viewing angle.

Yes, you can make a stereoscopic image using nothing but a depth map. Two perspectives are only needed to show you what's on the side of an object, providing the ability to view it at different angles.
HOWEVER 3D images do NOT require you to be able to change your viewing angle. This is currently what you see in 3D films.

Re:Not exactly new, and pretty limited (0)

Anonymous Coward | about a year ago | (#44507235)

Your eyes look at things from slightly different angles. Therefore an image with depth is not enough to accurately construct a stereoscopic image (especially when objects are close to the eyes). Still, it's possible to make an approximation, which will be good enough most of the time.

Re:Not exactly new, and pretty limited (0)

Anonymous Coward | about a year ago | (#44517011)

> A z-buffer IS a 3D scene.

That sentence alone makes your understanding of the subject pretty clear...

I remember reading a blog in 2006 (0)

Anonymous Coward | about a year ago | (#44503919)

where the man claimed to have made a DIY lens that could capture 3D and a program that would create a displacement map out of it. His videos were pretty convincing, and he could relight in post-processing and such things. He tried to sell his lens but it was a failure. His blog is still active; here are the archives: http://blogs.wefrag.com/divide/2006/02/ but all the links to the videos and pictures are dead.

astrophotography? (0)

Anonymous Coward | about a year ago | (#44505023)

I wonder if you could use this technique to form 3D pictures of various stellar phenomena, like say the crab nebula.

Re:astrophotography? (1)

HuguesT (84078) | about a year ago | (#44505945)

No, because the light rays coming from the Crab Nebula are all parallel. This technique relies on light coming from the sample at various angles to the lens. Essentially, the sample must be close to the optical system.

In radio astronomy, you can get some 3D information from radio sources, because radiotelescopes can measure the phase directly.

Re:astrophotography? (1)

Rui del-Negro (531098) | about a year ago | (#44517045)

You could... if your lens was about the size of a galaxy. ;-)

Cool idea but... (2)

EmperorOfCanada (1332175) | about a year ago | (#44505089)

It is a cool idea, but they are rotating the "3D" image about 1 degree. If they had even halfway good 3D data they could have rotated a whole lot more. My guess is that beyond 1 degree their "3D" turns into a spiky mess. Man, I am getting sick of this popular-science news: "Science has a way to make flying cars a reality in 5 years."

I am not doubting that 3D information can be extracted from focal data, I am doubting that these guys can do it.

Non-paywalled version (1)

HuguesT (84078) | about a year ago | (#44505963)

A. Orth and K. B. Crozier, "Light field moment imaging", non-paywalled version:

From Crozier's web page: http://crozier.seas.harvard.edu/publications-1/2013/76_Orth_OL_2013.pdf

So commenters here can be a little more informed about what these guys are really doing. From a quick reading (I'm not an expert), this is different from the usual depth from defocus. It allows for some 3D information, but not a lot, obviously. It could still be useful.

All the best.
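For anyone who only wants the one-line version of the paper: as far as I can tell from skimming it (and I may be oversimplifying), the core relation is a continuity equation tying the change of intensity with defocus to the first angular moment M of the light field, roughly

    \frac{\partial I}{\partial z} + \nabla_{\perp} \cdot \left( I \, \mathbf{M} \right) = 0

where M(x, y) is the mean ray angle at each pixel. Writing I M = \nabla U turns this into a Poisson equation, so the moment map can be recovered from just two frames at slightly different focus, which is where the "no fancy hardware" claim comes from.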

Re:Non-paywalled version (2)

HuguesT (84078) | about a year ago | (#44506021)

Sorry, with the clicky:

non-paywalled version [harvard.edu] of the article.
