
Camera Lets You Shift Focus After Shooting

samzenpus posted more than 3 years ago | from the we-can-focus-later dept.

Technology 155

Zothecula writes "For those of us who grew up with film cameras, even the most basic digital cameras can still seem a little bit magical. The ability to instantly see how your shots turned out, then delete the ones you don't want and manipulate the ones you like, is something we would have killed for. Well, light field cameras could be to today's digital cameras what digital was to film. Among other things, they allow users to selectively shift focus between various objects in a picture, after it's been taken. While the technology has so far been inaccessible to most of us, that is set to change with the upcoming release of Lytro's consumer light field camera."

fitrsrsitf (2, Funny)

Anonymous Coward | more than 3 years ago | (#36534678)

if you refocus that comment it reads as "first".

Personally, I'm still waiting for.. (1)

intellitech (1912116) | more than 3 years ago | (#36534696)

A holocam.

Re:Personally, I'm still waiting for.. (0)

Anonymous Coward | more than 3 years ago | (#36534894)

A holocam.

Er, that's essentially what this is.

Re:Personally, I'm still waiting for.. (3, Interesting)

Rei (128717) | more than 3 years ago | (#36535096)

From the sound of it, it basically captures a picture with a Z-buffer -- that is, they capture spatial information and angular information, and the angular information is then matched up to find corresponding objects to assess depth for refocusing.

One nifty thing about pictures and videos with built-in Z-buffers would be that it'd be really easy to render into them. Heck, you could have a camera with a built-in GPU that could do it in realtime as you're recording. :)

One step beyond the Z-buffer would be to then do a reverse perspective transformation and extract polygonal information from the scene. This would be of particular use in video recording, where people moving allows the camera to see what's behind them, hidden sides of their bodies, etc. Then you could not only refocus your image, but outright move the camera around in the scene. Of course, if we get to that point, then we'll start seeing increasing demand for cameras that always capture 360-degree panoramas. Combine this with built-in GPS and timestamping and auto-networking of images (within whatever privacy constraints are specified by the camera's owners), and the meshes captured from different angles by people who don't even know each other could be merged into a more complete scene. In busy areas, you could have a full 3d recreation of said area at any point in time. :) "Let's do a flyover along this path in Times Square on this date at this time..."

Re:Personally, I'm still waiting for.. (1)

PhantomHarlock (189617) | more than 3 years ago | (#36535974)

Given the demo in the video from the article, you should be able to both create an image-based 3D scene from the data and generate a stereo pair. On his laptop he was wiggling the perspective back and forth within the limits of the area that the camera captured. Being able to change the focus and depth of field automatically means that you've got a little bit of what's behind every edge.

Re:Personally, I'm still waiting for.. (1)

Rei (128717) | more than 3 years ago | (#36536702)

Unfortunately, the video isn't working for me on this computer. However, the ability to change depth of focus in post merely requires a blurring algorithm that's selective by z-buffer value. No new information is needed. It does require that the whole image be "sharp", of course.
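
(Aside: a minimal sketch of that z-buffer-selective blur, assuming you already have an all-in-focus image and a per-pixel depth map as NumPy arrays. This is just the idea described above, not Lytro's actual pipeline.)

    # Hypothetical sketch: synthetic refocus from an all-in-focus image (H, W, 3)
    # plus a per-pixel depth map (H, W). Blur grows with distance from the chosen
    # focal depth; depth is quantized into a few layers blurred separately.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def refocus(image, depth, focus_depth, layers=8, max_sigma=8.0):
        depth_range = depth.max() - depth.min() + 1e-9
        out = np.zeros_like(image, dtype=float)
        weight = np.zeros(depth.shape, dtype=float)
        edges = np.linspace(depth.min(), depth.max(), layers + 1)
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = ((depth >= lo) & (depth <= hi)).astype(float)
            sigma = max_sigma * abs((lo + hi) / 2 - focus_depth) / depth_range
            out += gaussian_filter(image * mask[..., None], sigma=(sigma, sigma, 0))
            weight += gaussian_filter(mask, sigma=sigma)
        return out / np.maximum(weight, 1e-6)[..., None]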

Re:Personally, I'm still waiting for.. (1)

im_thatoneguy (819432) | more than 3 years ago | (#36536288)

A Z-buffer is a possible output from this camera, but it does something way cooler than just depth information. The problem with pure depth is that if you have something like a chain-link fence, you don't know what's behind it if it's in focus. With this camera it captures 'around' the chain-link fence and sees what's behind it, so that you can throw it out of focus.

Re:Personally, I'm still waiting for.. (1)

Rei (128717) | more than 3 years ago | (#36536652)

Where on Earth did you get that? How is light supposed to travel around obstructions?

omg! (3, Funny)

Tolkien (664315) | more than 3 years ago | (#36534732)

Enhance.

Re:omg! (1)

Anonymous Coward | more than 3 years ago | (#36534786)

CSI's magical license plates are now possible!

Re:omg! (1)

theshowmecanuck (703852) | more than 3 years ago | (#36536354)

The license plate on the black car didn't look like it was any clearer when zoomed in. Nor the writing on the signs. It seems more like it is taken with an extreme depth of field and then the system selectively focuses/defocuses areas of the picture. I very likely could be wrong, but that is how it seems to me from just this small demo.

Re:omg! (1)

tedgyz (515156) | more than 3 years ago | (#36535546)

Enhance.

This is too funny! As soon as I read your comment, I realized Blade Runner was WAY ahead of its time. It is still one of the best renditions of a future dystopia on film. We all wish it were like Star Trek, but the truth is far more grim.

The real question is, are we already living in a dystopia? The world is pretty F'ed up.

Interesting. (1)

black soap (2201626) | more than 3 years ago | (#36534736)

For all the data it collects, does it do full spectrum or just 3 colors of light? Polarization after the fact? I wonder how long this will be "at least a year away." If it is real, I can think of lots of scientific applications more useful than a consumer camera.

Re:Interesting. (1)

Relyx (52619) | more than 3 years ago | (#36534804)

The underlying concept and algorithms are real, and no doubt there are many proofs-of-concept in existence. Whether the technology can be commercialised in a year, though, seems a bit of a stretch. I am willing to be proved wrong, of course - sounds very cool!

Re:Interesting. (1)

black soap (2201626) | more than 3 years ago | (#36534980)

If it can be adapted for spectrographic data (especially beyond human-visible wavelengths) and polarization, I'll be happy to have my skepticism proven unfounded. Until then, this sounds too sci-fi.

Re:Interesting. (4, Informative)

X0563511 (793323) | more than 3 years ago | (#36535228)

Read this paper [stanford.edu] (or at least skim it) - these are called plenoptic cameras.

It doesn't do any particular voodoo. I suppose you could distill it down to the point where the camera is (in function) a compound eye.

Re:Interesting. (1)

kcitren (72383) | more than 3 years ago | (#36535234)

Look at plenoptic cameras and integral imaging for current real-world implementations. Computational imaging is real and heavily used [in certain areas].

Re:Interesting. (0)

Obfuscant (592200) | more than 3 years ago | (#36535308)

Until then, this sounds too sci-fi.

This sounds like marketing nonsense to me. From one of their "how this works" pages:

The light field fully defines how a scene appears. It is the amount of light traveling in every direction through every point in space -- it's all the light rays in a scene. Conventional cameras cannot record the light field.

And if you know anything about a camera lens, neither can this "light field camera". Any light ray that doesn't travel towards the camera lens cannot be recorded by that lens. Any lens. Of "all the light rays in a scene", very few travel exactly the correct direction. (You can calculate what percentage by choosing a point in the image and determining what percentage of the whole sphere the solid angle of your lens intercepts.)

I looked at the "focus stacking" links found elsewhere. All I can say about that is -- if the watch in the picture had actually been running, it would not have helped the part of the image that was moving. Why not just use the correct depth of field to start with? It's not like your watch is going to go blind if you have to add illumination so you can get a tiny aperture.
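
(For the solid-angle percentage mentioned above, the arithmetic is quick; the aperture radius and subject distance below are made up purely for illustration.)

    # What fraction of all rays leaving a point in the scene actually enters the
    # lens? Solid angle of the entrance pupil divided by the full sphere.
    import math

    aperture_radius = 0.025   # metres, roughly a 50 mm entrance pupil
    subject_distance = 3.0    # metres from lens to the chosen point

    half_angle = math.atan(aperture_radius / subject_distance)
    cone_solid_angle = 2 * math.pi * (1 - math.cos(half_angle))   # steradians
    fraction = cone_solid_angle / (4 * math.pi)

    print(f"{fraction:.1e}")  # ~1.7e-05 of "all the light rays in a scene"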

Re:Interesting. (2)

swalve (1980968) | more than 3 years ago | (#36535576)

When you focus a camera and change the aperture, you filter out (roughly speaking) some of the information coming into it. This captures all that info. Somehow. The idea isn't to be a good photographer, but to capture more information about a scene.

Re:Interesting. (1)

Obfuscant (592200) | more than 3 years ago | (#36535894)

I read the paper linked to elsewhere, which was posted after I commented originally. Fascinating. I still say that the company website is marketing hype and patently false when it implies that it captures all the light rays. They imply this not only when they compare their product to a normal camera (the section I quoted), but later when they use the phrase "full light field" in describing what they process into the final image.

The paper clearly shows that it does not capture any more of the rays than any other camera. What it does is exchange resolution for information about where each proto-pixel is focusing, either in front of, at, or behind the image plane (microlens array). And from that it can "refocus" the image.

So, maybe the camera itself is real, but the hype on the website is still hype. I found the "picture gallery" especially unimpressive since the only "change of focus" I saw was when the small images moved to the center. Maybe this has something to do with not having flash on this system, but if you need flash to look at pictures something is horribly wrong. If these pictures are going to be printed, maybe I need to develop "flash paper" to print them on. Excuse me while I go file a patent...

Re:Interesting. (1)

bkpark (1253468) | more than 3 years ago | (#36536624)

And if you know anything about a camera lens, neither can this "light field camera". Any light ray that doesn't travel towards the camera lens cannot be recorded by that lens. Any lens. Of "all the light rays in a scene", very few travel exactly the correct direction. (You can calculate what percentage by choosing a point in the image and determining what percentage of the whole sphere the solid angle of your lens intercepts.)

I don't think they are claiming to build a full 3D model of the subject (that would indeed be sci-fi). I do think they are claiming to use additional information usually discarded by conventional light sensors (i.e. CCDs) -- something that corresponds to the radius of curvature [wikipedia.org] (I'm more familiar with the concept in the laser-beam setup, so I don't know exactly how that translates when you consider the diffuse light source that most objects present, but the paper abstract talks about light rays, like the rays in geometric optics).

I can see how that additional information can be used to re-focus, since with that additional information you can completely characterize the light source and know how far it is; the 3D thing (in the video, you can see some features that either hide behind the railing or come out), I'm not so sure.

Re:Interesting. (1)

Relyx (52619) | more than 3 years ago | (#36534870)

I would hazard a guess that it records just three colours of light. After all, the underlying digital sensor is based on existing technology found in modern cameras.

Re:Interesting. (1)

eigenstates (1364441) | more than 3 years ago | (#36535652)

I don't believe that it is. From a cursory second reading of the paper, it's a new type of sensor.

Re:Interesting. (2)

Obfuscant (592200) | more than 3 years ago | (#36535996)

I don't believe that it is. From a cursory second reading of the paper, it's a new type of sensor.

The paper says that the sensor was a Kodak KAF-16802CE. http://www.datasheetarchive.com/KAF-16802CE-datasheet.html#datasheets [datasheetarchive.com] is the datasheet for this chip, and it appears to be a stock Kodak CCD sensor. Nothing particularly new about it at all. The CE part implies it is a color filtered version.

The new part is the microlens array bolted on the front.

Re:Interesting. (1)

Threni (635302) | more than 3 years ago | (#36535756)

Existing sensors typically* record one colour of light - they just have filters over them. That's why you have to take megapixel counts with a grain of salt; there may be 9 megapixels, but they're divided into r/g/b components and then interpolated into a fuzzy image which is then sharpened in software to make up for all the f**king about.

*yeah, they don't all do it - just the ones everyone actually uses.

Re:Interesting. (1)

Kenoli (934612) | more than 3 years ago | (#36535052)

I wonder how long this will be "at least a year away."

According to the video in the article, the company is releasing "a competitively priced consumer camera" in 2011, i.e. no more than six months from now.

Re:Interesting. (1)

X0563511 (793323) | more than 3 years ago | (#36535094)

Well, it's a lot easier to commercialize something we already have [wikipedia.org] ...

Re:Interesting. (3, Informative)

X0563511 (793323) | more than 3 years ago | (#36535140)

... demonstrated to be a working principle [stanford.edu].

The paper includes graphics and formulas... a fuck load more detail than the story link given to us...

Re:Interesting. (1)

shmlco (594907) | more than 3 years ago | (#36535896)

So why isn't this technology in the public domain? The basic research was done at Stanford, and IIRC they get about $1.5 billion in federal research grants...

Re:Interesting. (3, Informative)

marcansoft (727665) | more than 3 years ago | (#36535166)

It's called a Plenoptic Camera [wikipedia.org]. You put a bunch of microlenses on top of a regular sensor. Each lens is the equivalent of a single 2D image pixel, but the many sensor pixels under it capture several variations of that pixel in the light field. Then you can apply different mapping algorithms to go from that sub-array to the final pixel, refocusing the image, changing the perspective slightly, etc. So color-wise it's just a regular camera. What you get is an extra two dimensions of directional information (the image contains 4 dimensions of information instead of 2).

Of course, the drawback is that you lose a lot of spatial resolution since you're dividing down the sensor resolution by a constant. I doubt they can do anything interesting with less than 6x5 pixels per lens, so a 25 megapixel camera suddenly takes 1 megapixel images at best. The Wiki article does mention a new trick that overcomes this to some extent though, so I'm not sure what the final product will be capable of.
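
(A rough sketch of one of those mapping algorithms -- plain shift-and-add refocusing over a decoded 4D light field. This is the textbook approach, not necessarily what Lytro ships, and it assumes the sensor data has already been rearranged into an array L[v, u, y, x] with the angular samples v, u taken from under each microlens.)

    # Shift-and-add refocus over a 4D light field of shape (V, U, H, W):
    # V*U angular samples per microlens, H*W microlens (spatial) positions.
    # alpha = 1.0 reproduces the capture-time focal plane; other values move
    # the synthetic focal plane forward or backward.
    import numpy as np
    from scipy.ndimage import shift as subpixel_shift

    def refocus(L, alpha):
        V, U, H, W = L.shape
        out = np.zeros((H, W), dtype=float)
        for v in range(V):
            for u in range(U):
                dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
                du = (u - (U - 1) / 2) * (1 - 1 / alpha)
                out += subpixel_shift(L[v, u], shift=(dv, du), order=1)
        return out / (V * U)

    # Using a single (v, u) sub-image instead of the average gives the small
    # perspective shifts mentioned above.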

I want it all (1)

wbean (222522) | more than 3 years ago | (#36534754)

What I want to know is, if they can focus at any point in the picture - and it looks as though they can, the interactive graphic is amazing - then why not just have the whole thing in focus at once. Infinite depth of field. If you wanted a shallow depth of field for artistic purposes, you could presumably add that later too. Neat.

Re:I want it all (1)

Relyx (52619) | more than 3 years ago | (#36534822)

I think the depth of field in the demo is just there to accentuate the idea that you can focus on different areas. As you say, I am sure you could produce a version with a very deep depth of field if so desired.

Re:I want it all (1)

Anonymous Coward | more than 3 years ago | (#36534956)

and that is the definition of a 'pinhole camera' -- however, you need a long exposure time....

Re:I want it all (2)

Psychotria (953670) | more than 3 years ago | (#36534976)

Have you ever tried to reduce the depth of field (DOF) of a photo that has too much DOF (for artistic purposes)? It's not easy at all. If you had a pair or more of images of the same subject from slightly different viewpoints (i.e. of the kind you'd take for "stereoscopic" photography) it might be easier, because at least then you'd have some additional cues as to the distance of various objects from the imaging plane, and it should be possible to write software that uses those cues to refine the DOF; I don't think it would be perfect, however. Doing it by hand (e.g. in Photoshop) is possible but time consuming and very difficult to do right (there are a lot of approximations you have to make). The easiest, and best IMO, way of achieving your desired DOF is to do it with the camera at the time you're taking the photo.

Back to the article, I actually don't understand how the process reported could work. To record light the recording medium (e.g. CCD or CMOS sensor) has to have the light fall on it and this implies focus. Possibly it somehow also records the direction of light to allow focus manipulation post-capture. Or possibly it takes multiple shallow DOF images at once. I wish the "article" had more details.

Re:I want it all (5, Informative)

pjt33 (739471) | more than 3 years ago | (#36535138)

The website about the camera doesn't have enough details, either, but this paper [stanford.edu] does give a reasonable idea of what's going on.

Re:I want it all (2)

Psychotria (953670) | more than 3 years ago | (#36535196)

Thank you! And I was just going to post a reply to my own message wondering aloud if they manipulated the light at the microlens level. Seems that this is exactly what they're doing:

"This is achieved by inserting a microlens array between the sensor and main lens, creating a plenoptic camera."

That would still only give several (two, maybe three depending on the array) planes of focus, though, and at a sacrifice of resolution. Still, pretty cool idea.

Re:I want it all (1)

tlhIngan (30335) | more than 3 years ago | (#36535188)

Back to the article, I actually don't understand how the process reported could work. To record light the recording medium (e.g. CCD or CMOS sensor) has to have the light fall on it and this implies focus. Possibly it somehow also records the direction of light to allow focus manipulation post-capture. Or possibly it takes multiple shallow DOF images at once. I wish the "article" had more details.

ISTR a little while ago there was a demonstration of a camera that basically used a honeycomb lens like that of a fly. Each little lens took a slightly different angle of the scene, and software processing generated the final image, letting you selectively focus on an image after it was taken.

Not sure if it was this company or another... it was a tech demo only though.

Re:I want it all (2)

vlm (69642) | more than 3 years ago | (#36535220)

Have you ever tried to reduce the depth of field (DOF) of a photo that has too much DOF (for artistic purposes)? It's not easy at all

Bonus! Artiste types love to brag/complain about how difficult/expensive their work was to make.

The non-artsy types don't really care about technical quality or anything other than getting a tolerably viewable "subject standing next to cultural item" shot.

Re:I want it all (2)

cruff (171569) | more than 3 years ago | (#36535066)

... then why not just have the whole thing in focus at once. Infinite depth of field.

I watched the video and I believe the guy being interviewed said you can do just that.

They give you focus everywhere in software (0)

Anonymous Coward | more than 3 years ago | (#36535122)

In their video demo, they let you pick the focal depth or you can hit the button that makes everything focused. The software can pick different portions of the image and apply the necessary focal depth.

Re:I want it all (1)

Anonymous Coward | more than 3 years ago | (#36535650)

Yes, you can have that. A plenoptic camera records many different directions of light rays for each pixel.

You select which rays you want to use later. So you can select close focus or faraway focus.
Or you can select the rays that correspond to f/22 and get nice DOF.

Re:I want it all (0)

Anonymous Coward | more than 3 years ago | (#36535796)

There are algorithms that could implement that. If it can't be done automatically given the algorithms already needed to produce an image from the camera data, it could in the worst case be run through a local contrast detection and image fusion algorithm using a number of different focal points.

Re:I want it all (1)

Anonymous Coward | more than 3 years ago | (#36536194)

Watch the interview with Ren Ng (on TechCrunch). They CAN focus the whole image.

Fake (-1, Troll)

geekpornfan (2297012) | more than 3 years ago | (#36534794)

It's like all these programs [aeonity.com] that do software focus.
They can only fake the effect, not really sharpen the picture.

Re:Fake (1)

pushing-robot (1037830) | more than 3 years ago | (#36534836)

goatse.

Re:Fake (0)

Anonymous Coward | more than 3 years ago | (#36534838)

Don't click his link. It's the Goatse.cx image.

Re:Fake (1)

Tsar (536185) | more than 3 years ago | (#36535054)

Don't click his link. It's the Goatse.cx image.

Also known as the poor man's basilisk [infinityplus.co.uk] .

Re:Fake (4, Informative)

wickerprints (1094741) | more than 3 years ago | (#36535098)

No. This is known as plenoptic imaging, and the basic idea behind it is to use an array of microlenses positioned at the image plane, which causes the underlying group of pixels for a given microlens to "see" a different portion of the scene, much in the way that an insect's compound eyes work. Using some mathematics, you can then reconstruct the full scene over a range of focusing distances.

The problem with this approach, which many astute photographers pointed out when we read the original research paper on the topic (authored by the same guy running this company), is that it requires an imaging sensor with extremely high pixel density, yet the resulting images have relatively low resolution. This is because you are essentially splitting up the light coming through the main lens into many, many smaller images which tile the sensor. So you might need, say, a 500-megapixel sensor to capture a 5-megapixel plenoptic image.

Although Canon last year announced the development of a prototype 120-megapixel APS-H image sensor (with a pixel density rivaling that of recent digital compact point-and-shoot cameras, just on a wafer about 20x the area), it is clear that we are nowhere near the densities required to achieve satisfactory results with light field imaging. Furthermore, you cannot increase pixel density indefinitely, because the pixels obviously cannot be made smaller than the wavelength of the light they are intended to capture. And even if you could approach this theoretical limit, you would have significant obstacles to overcome, such as maintaining acceptable noise and dynamic range performance, as well as the processing power needed to record and store that much data. On top of that, there are optical constraints--the system would be limited to relatively slow f-numbers. It would not work for, say, f/2 or faster, due to the structure of the microlenses.

In summary, this is more or less some clever marketing and selective advertisement to increase the hype over the idea. In practice, any such camera would have extremely low resolution by today's standards. The prototype that the paper's author made had a resolution that was a fraction of that of a typical webcam; a production model is extremely unlikely to achieve better than 1-2 megapixel resolution.
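
(The resolution tradeoff is just division; the numbers below are illustrative only, chosen to match the 500 MP / 5 MP example above.)

    # Spatial resolution out of a plenoptic sensor, illustrative numbers only.
    sensor_mp = 500          # hypothetical sensor size, megapixels
    angular_samples = 10     # directional samples per axis under each microlens

    output_mp = sensor_mp / angular_samples ** 2
    print(output_mp)         # 5.0 -> a 500 MP sensor yields roughly 5 MP images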

Re:Fake (1)

swalve (1980968) | more than 3 years ago | (#36535630)

Transparent sensors. Every picture is a 3d array instead of a 2d array.

Re:Fake (1)

Obfuscant (592200) | more than 3 years ago | (#36536088)

Transparent sensors. Every picture is a 3d array instead of a 2d array.

Foveon [foveon.com]

Re:Fake (2)

ColdWetDog (752185) | more than 3 years ago | (#36536140)

According to Thom Hogan [bythom.com] ,

... the prototype required a 16mp sensor array to produce a 90kp image. Some similar relationship is expected for a production camera.

Less than a 1 megapixel image. That's pretty small - would be OK for web viewing but not for printing. However, unless you 'stack' the images together to get a very large depth of field (which would often look very unreal), printing the image would not get you much aside from deciding what the focal plane would be.

A web gallery, however, would allow you to move the focus in and out at will (as shown in the examples) and might be more commercially viable. Hogan's main complaint is that they will have to sell a metric buttload of them to make a profit, and that would be hard to do as a one-trick, low-resolution pony. I'd love a higher resolution version for macrophotography but I guess I will just do plain old focus stacking for a while longer.

Probably not a good consumer product. (1)

purpledinoz (573045) | more than 3 years ago | (#36534814)

I have a small point and shoot camera, and I rarely ever have the problem that my photos are out of focus. Blurry photos on evening and night shots are the most common problem I have. Not to say this technology sucks, but I doubt that you can get the average consumer to pay double the price for this feature. However, there are probably tons of other uses that this technology might have (in more profitable areas). Maybe for security cameras, or unmanned vehicles.

Re:Probably not a good consumer product. (1)

Relyx (52619) | more than 3 years ago | (#36534918)

The smaller the sensor, the greater the depth of field, and therefore the easier it is to get sharp images. I would imagine your point and shoot has a pretty small sensor. A professional DSLR on the other hand has a much larger chip.

Re:Probably not a good consumer product. (1)

BrianRoach (614397) | more than 3 years ago | (#36535046)

Exactly. I shoot motorsports with a Canon DSLR (20D) and a 400mm lens. Not the same thing as pulling out a little point and shoot and pressing the button ;)

Re:Probably not a good consumer product. (1)

afidel (530433) | more than 3 years ago | (#36535342)

And it's not just sensor size: the larger the magnification of the lens, the shallower the DOF (generally). The DOF on my 150-500 at 400-500mm is really shallow, making shots of anything moving in less than perfect sunlight fairly difficult.

Re:Probably not a good consumer product. (1)

lahvak (69490) | more than 3 years ago | (#36535030)

Small point and shoot cameras have a very small sensor and a lens with a short focal length. That combination means that they have a very large depth of field, which means that on a typical picture, everything or almost everything is in focus. That can be an advantage, but it can also be a disadvantage if you want to, for example, "isolate" an object by focusing on it and having it show sharp and focused against a blurry background.

Re:Probably not a good consumer product. (1)

aeortiz (1498977) | more than 3 years ago | (#36535102)

The price will determine its success. Consumers don't care about the tech aspect, only the price, convenience and how they are introduced to the idea.

That said, the technology is similar to insect compound eyes. It uses multiple tiny lenses directly in front of the sensor to create many tiny copies of the main image. It then uses software to determine the vectors of the light hitting it to correct lens aberration, defocus, noise, and uses parallax to determine depth (it has 3D capability). Because it captures more light (by using a much greater aperture without the usual blurriness), it also allows a much shorter shutter time.

The downside: it uses many pixels on the sensor to determine a single pixel in the final image. Each microlens becomes a single pixel.

The dissertation by the CEO is not too difficult to follow. I would recommend anyone who likes CS and Photography to read it on the lytro website: http://www.lytro.com/science_inside [lytro.com] (see page 4)

Re:Probably not a good consumer product. (2)

timeOday (582209) | more than 3 years ago | (#36535644)

It seems to me this might eventually be cheaper than a conventional focusing mechanism, because it doesn't require any moving parts. Obviously it's not cheaper yet, but usually solid state electronics end up being cheaper than complex mechanical assemblies (pop open an old tape walkman sometime and check out the choreography of moving parts as you push "play").

Assuming the microlens array isn't part of the lens, you could seemingly reduce the cost and complexity of big telephoto lenses by a lot, and those are the most expensive part of any good setup.

Re:Probably not a good consumer product. (1)

tippen (704534) | more than 3 years ago | (#36535986)

That misses the point of this technology. One of the big things that separates fantastic photos from p&s snapshots is shallow depth of field. Having the subject tack sharp and in focus with everything else melting away in beautiful bokeh.

duplicate! duplicate! (0)

Anonymous Coward | more than 3 years ago | (#36534828)

Old News! this is just another plenoptic camera... booring.. so 2006.

Focus stacking (1)

BWJones (18351) | more than 3 years ago | (#36534874)

Conceptually, it's a little like focus stacking http://prometheus.med.utah.edu/~bwjones/2009/03/focus-stacking/ [utah.edu] only with a compound lens that does all the exposures at once. More examples of focus stacking here: http://prometheus.med.utah.edu/~bwjones/tag/focus-stacking/ [utah.edu]
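
(A minimal focus-stacking sketch, in case anyone wants to play along: per pixel, keep whichever frame in the stack has the highest local contrast. It assumes the frames are already aligned; alignment is the fiddly part, as the replies below note.)

    # Naive focus stacking: for each pixel, take the value from the frame with
    # the strongest local contrast (absolute Laplacian, lightly smoothed).
    # frames: list of pre-aligned 2D float arrays of identical shape.
    import numpy as np
    from scipy.ndimage import laplace, gaussian_filter

    def focus_stack(frames):
        stack = np.stack(frames)                              # (N, H, W)
        sharpness = np.stack([
            gaussian_filter(np.abs(laplace(f)), sigma=2.0)
            for f in frames
        ])
        best = np.argmax(sharpness, axis=0)                   # sharpest frame per pixel
        return np.take_along_axis(stack, best[None], axis=0)[0]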

Re:Focus stacking (1)

lahvak (69490) | more than 3 years ago | (#36535068)

I did not read the details, but the example pictures they provided did seem to have several distinct planes of focus that you could choose. With the size of the pictures, I couldn't tell whether the focus changes if you select two objects that are actually fairly close to each other, but it didn't seem so to me.

Re:Focus stacking (1)

Psychotria (953670) | more than 3 years ago | (#36535078)

I use focus stacking for my microscopy; however, does (or could) this method "scale up" to objects, or parts of objects, that span a much greater distance, i.e. beyond the mm or sub-mm range I have experience with? (You're in a better position to answer this than me, I think, judging from your post history.) I'm asking because I know that when I stack, say, 50 images, each with a depth of field of 0.5mm, to create an image with ~25mm (just as an example), alignment becomes a problem (I'm not talking about stacking by hand, but using software) and I often have to adjust the alignment and transformations by hand to get the best image.

Re:Focus stacking (1)

Psychotria (953670) | more than 3 years ago | (#36535114)

Just adding to my above comment, those numbers I used as an example are not typical. More often than not the final DOF I am after is probably 1mm maximum and each photo in the stack has way less than 0.5mm DOF.

Re:Focus stacking (1)

ColdWetDog (752185) | more than 3 years ago | (#36536208)

This system could well work for that, but as has been pointed out, you either lose resolution or you scale up to a large, expensive sensor (a 16 MP sensor giving roughly a 1 MP image). Depending on the various tradeoffs it might be something Zeiss or Nikon would kick out (for a nice chunk of change, of course).

Re:Focus stacking (1)

BWJones (18351) | more than 3 years ago | (#36535142)

Absolutely. The algorithms and principles are the same. The issue is that it tends to be more useful when your plane of focus (depth of field) is limited, as it is in microscopy. You can experiment with this with an SLR camera by selecting an aperture wide open (f/1.2, 1.4 or 1.8 on a 50mm lens for instance). Take pictures of things close, mid and far away and stack the images. Works great.

As for alignment, Photoshop CS5 contains algorithms that also automatically align your images. Very useful.

Re:Focus stacking (1)

Psychotria (953670) | more than 3 years ago | (#36535272)

Thanks. That is now my experiment for the day ;-)

Big tradeoff (1)

Rothron the Wise (171030) | more than 3 years ago | (#36534898)

The first product will probably be a DSLR-sized sensor with mobile-phone-type image sensor density. They are trading away a lot of pixels for this feature. You'll need 100 megapixel sensors to end up with usable image sizes, as one microlens covers many sensor cells. It will be interesting to see how low-light noise artifacts will look, as there are bound to be a lot of them with such high sensor density.

Re:Big tradeoff (1)

StripedCow (776465) | more than 3 years ago | (#36535290)

I don't know how this technology works, but I can also imagine they may take several pictures with a normal CCD, with the lens going through a series of steps. Then in software, they may recombine the images and make up for the long shutter time with some kind of smart algorithm that compensates for movement.

Re:Big tradeoff (2)

ka9dgx (72702) | more than 3 years ago | (#36535300)

Because a big sensor with a microlens array could be calibrated, you could use Richardson-Lucy deconvolution [wikipedia.org] to recover, for a given plane of focus, almost all of the raw resolution of the original sensor, if the computing resources are available.
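
(The Richardson-Lucy iteration itself is only a few lines; a bare-bones sketch, assuming the point-spread function is already known from that calibration -- which is the hard part. scikit-image ships an implementation as well.)

    # Bare-bones Richardson-Lucy deconvolution with a known PSF.
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, iterations=30):
        estimate = np.full(observed.shape, 0.5)
        psf_mirror = psf[::-1, ::-1]
        for _ in range(iterations):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = observed / (blurred + 1e-12)
            estimate *= fftconvolve(ratio, psf_mirror, mode="same")
        return estimate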

Operation (0)

Anonymous Coward | more than 3 years ago | (#36534934)

The only downfall of this is that it considerably reduces the effective resolution of the camera sensor.

The camera works by placing an array of micro lenses in front of the image sensor. This allows you to record the direction the light was traveling in addition to just its luminance. In effect you can mathematically change the focus of the image by selecting which directional light rays are incorporated into the photo.

You can read the CEO's thesis here
http://www.lytro.com/renng-thesis.pdf

Re:Operation (1)

yossie (93792) | more than 3 years ago | (#36535076)

From what I recall of reading about this a year+ ago, the same tech would allow for mathematically changing focus, zoom, pan and tilt in real time on the signal from the camera, or after the fact on a recording of the same signal. So, basically, a flat non-moving sensor can now emulate a PTZ camera. I can imagine that with a spherical lens, or one of those weird mirrors that lets a regular camera catch a 360 image, it should be possible to make a near holocam. Imagine a movie shot like this and glasses that allow you to focus on different depths at will (maybe by tracking your eyes?); it would be a "real" 3D experience, none of that fuzzy background you get in stereo 3D movies.

Re:Operation (1)

tibit (1762298) | more than 3 years ago | (#36535378)

The problem is this: this is not how movies work. It'll cost a fortune to make a movie where you can "look around". Shots are usually planned in detail, so if something is out of focus and is a prop/set, it's way cheaper that way. Even in CG movies, there are still digital props, sets, etc, and they are planned according to the needs of the script and the director's ideas. The level of detail varies and usually is only enough to do the job; doing otherwise would be a waste of money -- if it doesn't end up on film, it's a waste.

Here, suddenly, you'll need sets with way more detail, all props will be subject to potential scrutiny even if artistically they're not very important, and it'll be way more work for continuity and for making sure there are no gaffes in the recorded material.

I don't see it happening, not without a major mental shift in the moviemaking business.

Headline should be (0)

Anonymous Coward | more than 3 years ago | (#36534978)

Camera lets you suck $50,000,000 from VCs and disappear.

faked demo (0)

Voltara (6334) | more than 3 years ago | (#36535000)

If you examine their demo file, you'll find the 5 static JPEGs inside.

http://cdn01.lytro.com/media/lytro/lyt-37/lyt-37.lfp [lytro.com]

What exactly was the point of that "demo"?

Re:faked demo (1)

EvanED (569694) | more than 3 years ago | (#36535120)

I think you're confusing presentation with the source data. If those five JPEGs are generated from a single light-field exposure, then the demo isn't faked. (Or at least, is faked to the same extent that a typical camera shot is faked because the sensor data needs to be processed before it's displayed to you.)

Re:faked demo (1)

tibit (1762298) | more than 3 years ago | (#36535398)

So what, you expected them to recode their algorithms in flash, send you a source image that's dozens of megabytes in size, and have you wait tens of seconds, possibly minutes, while the whole thing is recalculated after each click? Haha.

Re:faked demo (1)

LBArrettAnderson (655246) | more than 3 years ago | (#36535532)

And if it weren't a "faked demo," how exactly would the images be presented to you?

Re:faked demo (1)

flimflammer (956759) | more than 3 years ago | (#36535554)

Are you really that dense? Did you think they were going to create a full fledged web viewer for files of god knows what size so you could get the genuine experience over what 5 demo images could show you?

useful for movies! (2)

StripedCow (776465) | more than 3 years ago | (#36535084)

For making movies, this would be very useful, because when shooting a movie it is generally quite difficult to keep focus.

Re:useful for movies! (1)

tibit (1762298) | more than 3 years ago | (#36535412)

Yes, but the way it'll be used is that synthetic focus will be applied during post production / editing, and it will end up as a "regular" film. IOW: nothing interactive about the end product, when used for movies.

Re:useful for movies! (0)

Anonymous Coward | more than 3 years ago | (#36535568)

No it's not, you just keep one hand on the focus ring on your lens, and adjust as necessary. It's why many video cameras only have black and white screens in the viewfinders - easier to see whether or not the image is in focus if you ignore the chroma information. Maybe this is hard on terrible consumer cameras, and some prosumer cameras without focus rings, but if you have a lens with a focus ring and a zoom ring, you're pretty much good to go.

Doing it by hand, with only a single normal camera (1)

ka9dgx (72702) | more than 3 years ago | (#36535184)

I've been doing this for a few years, with one camera taking many views, since I first found out about the research they were doing at Stanford. Here are some scenes around Chicago [flickr.com] which are composites of many photos to generate a synthetic focus. The idea is to capture the scene from many slightly different points of view, and to capture all of the parallax information, which then yields depth.

I haven't been able to make it happen, but it should be possible to combine N pictures to get a bit less than N times the normal resolution. If you had 100 photos that were 8 megapixels each, you should be able to composite them into a 100 megapixel image with the right alignment and extrapolation algorithms.

Re:Doing it by hand, with only a single normal cam (1)

ceoyoyo (59147) | more than 3 years ago | (#36535926)

"I haven't be able to make it happen, but it should be possible to combine N pictures to get a bit less than N times the normal resolution. If you had 100 photos that were 8 megapixels each, you should be able to composite them into a 100 megapixel image with the right alignment and extrapolation algorithms."

No, you can't. Using super-resolution and an expensive mount that can shift the picture by EXACTLY half a pixel (or a quarter, or an eighth), you can get better resolution out of multiple shots, but the technique is severely limited in practice. If you get a factor of two you're doing well.

Unless you're talking about using a longer lens, taking multiple pictures and stitching them together. That's trivial.
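
(The half-pixel version the parent describes amounts to interleaving samples onto a finer grid; a toy sketch with exact, known shifts -- which is precisely the condition that's so hard to meet in practice.)

    # Toy 2x super-resolution from four frames captured at exact half-pixel
    # offsets (0,0), (0,.5), (.5,0), (.5,.5): each frame fills one phase of the
    # finer grid. With real, inexact shifts you'd need registration and a proper
    # reconstruction step, which is why gains beyond ~2x are rare.
    import numpy as np

    def shift_and_add_2x(f00, f01, f10, f11):
        H, W = f00.shape
        hi = np.zeros((2 * H, 2 * W), dtype=float)
        hi[0::2, 0::2] = f00   # no offset
        hi[0::2, 1::2] = f01   # half pixel right
        hi[1::2, 0::2] = f10   # half pixel down
        hi[1::2, 1::2] = f11   # half pixel down and right
        return hi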

"hyper stereo" cameras (4, Informative)

peter303 (12292) | more than 3 years ago | (#36535212)

There has been a fair amount of computer science research over the last decade on what you could do if you took a picture with a plane of cameras instead of just one or two. The resulting dataset is called a "light field". You can re-composite the pixels to change depth of focus, look around or through occluding obstacles, dynamically change point of view, etc. As digital webcams became dirt cheap, people started building these hyper-cameras and experimenting with them. People learned you could do relatively interesting things with small arrays of 4x4 or 5x5 cameras. Later on they discovered you could do this with one camera with a multi-part lens, then reconfigure the output pixels in the computer in real time. I've seen all these systems demo'ed at SIGGRAPH over the years. Now someone appears to be commercializing one.

I think the infamous bullet-dodging scene in the first Matrix movie was shot with a type of hyper-stereo camera, albeit a row of them. The output light field was reconfigured to expand point-of-view into time.

Re: "hyper stereo" cameras (0)

Anonymous Coward | more than 3 years ago | (#36535254)

The "bullet time" scenes were just a bunch of regular photo cameras triggered at the same time, or in very quick succession. Then they turn them into video frames and interpolate a bit to create the effect. Of course, that's one way to capture more of a light field than with a single camera, but it wasn't any kind of fancy light-field tech and didn't use any particularly special processing.

Re: "hyper stereo" cameras (1)

Psyborgue (699890) | more than 3 years ago | (#36536104)

So basically what you're saying is you take an array of pinhole cameras, interpolate the array of images further, use the differences to generate depth, and then apply a post process? Or is it actually doing some resampling... like using the array of cameras as "film"? I was skeptical at first on hearing about this (they make it seem like a single camera), but now it makes sense. Clever.

Re: "hyper stereo" cameras (1)

peter303 (12292) | more than 3 years ago | (#36536520)

A desired output image may be (1) just one of the cameras, (2) a mathematical operation on a subset of cameras, or (3) a mathematical operation on all the cameras. I recall changing the focus is a weighted integral over all the cameras, with the weighting kernel a function of depth and camera position. I'd have to google "synthetic aperture" and "Marc Levoy" of Stanford for the paper. His research summary lists many of the light field algorithms with references to other work.
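
(In the discrete case that weighted integral is just a shift-weight-sum over the array; a sketch of the synthetic-aperture idea, with a made-up Gaussian weighting kernel rather than whatever Levoy's paper actually uses.)

    # Synthetic-aperture focusing over a camera array: shift each view in
    # proportion to its camera's offset and the chosen focal depth, weight by a
    # kernel over camera position (a stand-in aperture), and sum.
    import numpy as np
    from scipy.ndimage import shift as translate

    def synthetic_aperture(images, positions, focal_disparity, aperture_sigma=1.0):
        # images: list of 2D arrays; positions: (dx, dy) per camera in array units.
        acc = np.zeros_like(images[0], dtype=float)
        total = 0.0
        for img, (dx, dy) in zip(images, positions):
            w = np.exp(-(dx ** 2 + dy ** 2) / (2 * aperture_sigma ** 2))
            acc += w * translate(img, shift=(focal_disparity * dy, focal_disparity * dx), order=1)
            total += w
        return acc / total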

They do more than just extended depth of field! (0)

Anonymous Coward | more than 3 years ago | (#36535250)

I have had a rather enjoyable experience playing with a light field camera ( www.raytrix.de/index.php/id-4d-cameras.html ) and this article really undersells their capabilities! Not only can you dynamically refocus the image, you can also alter perspective, slightly change the angle of view, and change the depth of field... This video shows some of this pretty well: http://www.youtube.com/watch?v=9H7yx31yslM

Lots of red flags, little tech (2)

erice (13380) | more than 3 years ago | (#36535296)

All the information is about the implications, but not about how it actually works or the trade-offs required to get there. They also seem to be going directly to the consumer. There are only two reasons to bypass big-spending pros and prosumers when introducing new technology:

  • 1) The technology is useless for those who know what they are doing (face recognition) or
  • 2) The quality of the result is significantly lower than existing tech without compelling advantages for those who know what they are doing.

My guess is #2. Exploding the pixel count of the sensor would make the product outrageously expensive. Clearly they are not doing that. So that means the quality suffers as finely adjustable optical focus is replaced by coarse digital focus achievable from the available sensors. We are probably getting camera phone level results. Good enough for Facebook but not something you want to print.

Re:Lots of red flags, little tech (1)

jet_silver (27654) | more than 3 years ago | (#36535764)

Look at page 4 of this: http://www.lytro.com/science_inside. You can read the founder's Ph.D. dissertation and I guarantee you'll get your geek on if you can follow it. It's a really excellent piece of work, and at the same time it is written in such a pleasant style that it keeps you curious and interested.

Re:Lots of red flags, little tech (1)

yodleboy (982200) | more than 3 years ago | (#36535908)

What this really means is that they know where the real market is. Good enough for Facebook is probably good enough for 90% of the people shopping for cameras.

Don't get me wrong, I've got the whole DSLR thing and still have my medium format film gear, but sometimes I just want to whip out a small camera to get a shot of the kids and it would be nice to know that no matter what, I'll have an in focus shot now or in post processing.

kinect + infinite depth of field + post processing (0)

Anonymous Coward | more than 3 years ago | (#36535684)

This sounds like something that can be done cheaply with a Kinect mod.

Re:kinect + infinite depth of field + post process (1)

ldbapp (1316555) | more than 3 years ago | (#36536378)

To a degree. Kinect has limited range on its depth sensor. The camera image is from exactly one POV, so you can't get the micro-movements you can with their camera.

You can do this yourself with your own camera (0)

Anonymous Coward | more than 3 years ago | (#36535710)

You just need to open up your lens and put in a coded aperture, and the rest is all software. See the paper Image and Depth from a Conventional Camera with a Coded Aperture [mit.edu] , and especially check out the last several slides from the supplementary file [mit.edu] where they take a single picture and refocus it at a new focal length.

Interesting.... (0)

Anonymous Coward | more than 3 years ago | (#36535786)

So I can buy a camera that is likely to be very expensive and take a picture that I can selectively focus on my computer after the fact.

Ok, but will the image always look as bad as the one on the page that was linked? Because I have a webcam that takes better pictures than that.
If my Canon point-and-shoot took pictures that bad I would send it in for repair.... or buy a new one since it was only a hundred bucks.

Guaranteed that I would send either of my dSLRs in for repair. One of them is OLD (6mp) and still takes much better pictures. Of course it has the disadvantage of me having to know what I am taking a picture of when I take a picture.....

Yeah, generally speaking I see this as "tech for tech's sake" rather than anything actually useful.

Large focal length trick (0)

Anonymous Coward | more than 3 years ago | (#36535860)

Take a shot with a large focal length, then blur around the focus point!!! Simple stuff.

Vancouver Rioters (0)

Anonymous Coward | more than 3 years ago | (#36536052)

Lookout!

Link to the CEO's PhD thesis... (1)

karbonKid (902236) | more than 3 years ago | (#36536056)

...which explains the tech and its application in wonderful detail. http://www.lytro.com/renng-thesis.pdf [lytro.com]

I'd like one please (1)

randy of the redwood (1565519) | more than 3 years ago | (#36536124)

This could be a real boon to photography, even for us photo snobs who like to take very small depth of field pictures for the artistic effects. Sensors are getting to the point where they are being restricted by the granularity of the glass, so we seem to have pixels to spare compared to the viewing medium (mostly our PC screens these days) - http://www.dpreview.com/news/1008/10082410canon120mpsensor.asp [dpreview.com]

It would be great if these two technologies can dovetail in a way that I can get a high resolution (6-8 megapixel equivalent in current terms) picture with the ability to pick both my depth of field and focal point post processing.
