
Capturing 3D Surfaces Simply With a Flash Camera

timothy posted about 6 years ago | from the more-depth-than-I've-got dept.

Graphics 131

MojoKid writes with this excerpt from Hot Hardware (linking to a video demonstration): "Creating 3D maps and worlds can be extremely labor intensive and time consuming. Also, the final result might not be all that accurate or realistic. A new technique developed by scientists at The University of Manchester's School of Computer Science and Dolby Canada, however, might make capturing depth and textures for 3D surfaces as simple as shooting two pictures with a digital camera — one with flash and one without. First an image of a surface is captured without flash. The problem is that the different colors of a surface also reflect light differently, making it difficult to determine if the brightness difference is a function of depth or color. By taking a second photo with flash, however, the accurate colors of all visible portions of the surface can be captured. The two captured images essentially become a reflectance map (albedo) and a depth map (height field)."
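The flash/no-flash separation described in the summary can be sketched in a few lines. This is a toy illustration only, not the researchers' actual algorithm; the array values, the function name, and the simple division used to recover shading are all assumptions:

```python
import numpy as np

# Toy sketch of the idea in the summary: the flash shot approximates the
# reflectance (albedo) map, and dividing it out of the ambient shot leaves
# the shading, which relates to depth. All values here are made up.
def separate_albedo_and_shading(no_flash, flash, eps=1e-6):
    """Return approximate (albedo, shading) maps from registered,
    linear-intensity grayscale images of the same scene."""
    albedo = flash                       # uniformly lit: mostly surface color
    shading = no_flash / (albedo + eps)  # remove color, keep light variation
    return albedo, shading

# Two patches of different color under the same side lighting end up with
# the same shading value once the albedo is divided out.
no_flash = np.array([[0.1, 0.4],
                     [0.2, 0.8]])
flash    = np.array([[0.2, 0.8],
                     [0.2, 0.8]])
albedo, shading = separate_albedo_and_shading(no_flash, flash)
```

Presumably the real pipeline would first isolate the pure flash component (e.g. by subtracting the ambient shot) and then convert shading into a height field; the sketch only shows the core color-versus-depth separation.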


131 comments


Amateurs. (5, Funny)

bigtallmofo (695287) | about 6 years ago | (#24768391)

Creating 3D maps and worlds can be extremely labor intensive and time consuming.

Bah! I completed my last project in exactly 6 days and used nothing but voice commands. It turned out so well I sat on my couch and ate Cheetos the entire next day. Today, there are over 6 billion users and we're only now starting to run into scalability issues.

-God


Re:Amateurs. (4, Funny)

sm62704 (957197) | about 6 years ago | (#24768687)

Yeah, but look at how bloated your operating system is!

Re:Amateurs. (1)

DarthJohn (1160097) | about 6 years ago | (#24774023)

It's actually fairly elegant [wikipedia.org] .

Re:Amateurs. (5, Funny)

clarkkent09 (1104833) | about 6 years ago | (#24768797)

I hear your support is terrible, though. People practically have to beg on their knees to get their problems solved.

Re:Amateurs. (0)

Anonymous Coward | about 6 years ago | (#24768897)

That's right. Also, I'd like to file quite a few bug reports, don't even know where to begin...

Re:Amateurs. (3, Funny)

Tophe (853490) | about 6 years ago | (#24769665)

It's not a bug, it's a FEATURE!

There's a reason for that... (3, Informative)

m.ducharme (1082683) | about 6 years ago | (#24769923)

Obligatory XKCD [xkcd.com]

Re:Amateurs. (1)

HungSoLow (809760) | about 6 years ago | (#24773243)

And yet everyone is still on call waiting listening to that awful background noise (preachers)

Re:Amateurs. (5, Funny)

eln (21727) | about 6 years ago | (#24768823)

Your project is a case study in bad management, though. Sure, you completed the whole thing in six days, but what are we left with? Documentation that's cryptic at best, and literally billions of bugs.

Re:Amateurs. (4, Funny)

ajlitt (19055) | about 6 years ago | (#24768847)

And don't get me started on that unhandled divide-by-zero exception!

Re:Amateurs. (4, Funny)

gnick (1211984) | about 6 years ago | (#24770501)

The divide-by-zero exception is hardly fair. How can he fix a bug that we can't even replicate? As soon as the LHC comes on-line, we can file an official bug report. Until then, let him off the hook.

Re:Amateurs. (4, Funny)

famebait (450028) | about 6 years ago | (#24769443)

He does at least seem to fix hacking vulnerabilities, though. According to accounts there used to be a lot more magic about only a few centuries ago. Or maybe the talent just matured and moved over to the more challenging but reliable fields of reverse engineering and repurposing the apparently intentional features.

If only similar attention was directed to safety...

Re:Amateurs. (1)

Dekker3D (989692) | about 6 years ago | (#24769855)

heh, good one! no mod points here at the moment, though.

but you have to admit, at least this world doesn't have script kiddies. you can't take someone else's hacks and use them yourself like you can for any microsoft system. gotta love that little bit of attention to detail, i mean.. imagine what would happen if everyone could just telekinese some deadly shit on anyone else's head.
chaos, mayhem and no more politicians.
yup, great job, god!

now, folks, please excuse me while i go look for some exploits.

Re:Amateurs. (2, Funny)

SlipperHat (1185737) | about 6 years ago | (#24769623)

Sorry about the docs, but we kicked Satan out of the team when he started getting all arrogant about who's better than who and what not. He's been trying to hack the system ever since.

Re:Amateurs. (2, Funny)

Oxen (879661) | about 6 years ago | (#24769907)

And don't get me started on all of the exploits you left open. Sure, you provided each of us with our own anti-virus system, but if your project had been well thought out there would be no possibility of viruses and worms to begin with.

Re:Amateurs. (1)

Gyga (873992) | about 6 years ago | (#24770849)

Viruses and worms allow for optimization of the entire unit, sub-par pieces are deactivated and recycled (a very long process).

Re:Amateurs. (4, Funny)

Anonymous Coward | about 6 years ago | (#24768871)

Gameplay sucks, just one endless grind.

Re:Amateurs. (5, Funny)

spun (1352) | about 6 years ago | (#24769011)

Obviously, you haven't unlocked the right minigame. It's a short game, but it makes grinding fun.

Oh, you make it sound so easy... (4, Funny)

GameboyRMH (1153867) | about 6 years ago | (#24769667)

Unfortunately unlocking the minigame can be nearly impossible if you have the wrong arbitrarily-assigned game character. Of course you could modify your character and change your character's gear to make it a little easier, but that's even more work and expense and doesn't make a big difference. There's also a way to pay your way into one minigame session but you'll have to be discreet about it unless you want to start another minigame that involves a lot of not-fun stuff like carefully balancing a slippery bar of soap.

Re:Oh, you make it sound so easy... (1)

Firehed (942385) | about 6 years ago | (#24770293)

The warning in your sig thoroughly disregarded, that post is a perfect example of why most slashdotters aren't playing that minigame.

Re:Amateurs. (2, Funny)

Anonymous Coward | about 6 years ago | (#24769603)

But the graphics are excellent!

Re:Amateurs. (2, Funny)

gnick (1211984) | about 6 years ago | (#24770557)

I used to think so too. But once I got my HDTV set up, the resolution on my back yard's just not that impressive.

Re:Amateurs. (2, Funny)

SlipperHat (1185737) | about 6 years ago | (#24769763)

Whatever you do, don't unlock marriage. You can't grind anymore, every night you lose all your gold, and you take damage if you emote at the opposite sex.

Re:Amateurs. (1)

lord_sarpedon (917201) | about 6 years ago | (#24769861)

I'm taking Improved Daydreaming next patch. They dropped Advanced Sexology from the Codemonkey tree - got to get through the day _somehow_ after all.

NERF!

Re:Amateurs. (1)

h.ross.perot (1050420) | about 6 years ago | (#24768989)

Right.. this universe of yours.. "It was a bit of a botch job, you know.." And what about the map of all the holes? Came in pretty handy.. http://en.wikipedia.org/wiki/Time_Bandits [wikipedia.org]

FTW - Flash Camera? (0)

Anonymous Coward | about 6 years ago | (#24768991)

First it's barnacles on the web and now cameras. Is there anything that annoying Flash crap won't infect? ... (mutters to self about the good old days ...)

Re:Amateurs. (0)

Anonymous Coward | about 6 years ago | (#24770525)

It turned out so well I sat on my couch and ate Cheetos the entire next day.

Yeah, but now you have an orange dong

Re:Amateurs. (1)

ZarathustraDK (1291688) | about 6 years ago | (#24770881)

Today, there are over 6 billion users and we're only now starting to run into scalability issues.

Pfff...beta-software. When can we expect the final version? Word has it Duke Nukem Forever is coming out soon, imagine the PR-nightmare if you're slower than them, you'd be the laughing-stock of Slashdot. Oh wait...

Re:Amateurs. (0)

Anonymous Coward | about 6 years ago | (#24775045)

Damn. Is this a record for the most posts in a single continuous thread modded funny?

If you make enough simplifying assumptions... (5, Interesting)

jeffb (2.718) (1189693) | about 6 years ago | (#24768409)

...all sorts of problems become simple. I'd love to take a picture with some mirrors, some windows, maybe a reflective sign or two in the background, and see the funhouse effects that result. Oh, and don't forget emissive elements (lamps), which will appear to recede to infinity.

Re:If you make enough simplifying assumptions... (3, Insightful)

Squapper (787068) | about 6 years ago | (#24768799)

Yeah, this only seems to work with Lambertian surfaces in flat-lit environments.

That's not the biggest problem, though. I'm a 3D artist, and it's a pain to try to make a tiling texture map out of a picture containing more than three channels, due to stupid limitations in all 2D applications.
It's often more efficient to first make the color texture tile, then create a heightmap from that data. I guess that's why they're targeting scientific applications such as archaeology, which require more accuracy and also employ less skilled 3D artists (no offense).

Re:If you make enough simplifying assumptions... (1)

collywally (1223456) | about 6 years ago | (#24769065)

Try using a compositing program. Something like Nuke will enable you to paint on all the layers using an OpenEXR file. It's kind of a cheat, but it can be done.

Re:If you make enough simplifying assumptions... (0)

Anonymous Coward | about 6 years ago | (#24771267)

Emissive elements will be virtually identical regardless of whether or not the flash is applied, and regardless of the flash's direction.

Quite old news (4, Informative)

gardyloo (512791) | about 6 years ago | (#24768411)

Slashdot (can't be bothered to find it) had a story several years ago about the (then old!) technique of capturing complicated 3D objects, such as car engines, by using two flash images, each with the flash located in a slightly different location. Thresholding the difference between the images gives very nice edge detection, along with very accurate depth information.

A project I'm working on uses the technique to capture information about arrowheads/spearheads.
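For the curious, the two-flash difference idea can be sketched like this. It's a hypothetical toy example; the arrays and the threshold are invented, not taken from the project:

```python
import numpy as np

# Toy sketch of the two-flash technique described above: shadows cast from
# slightly different flash positions shift near depth discontinuities, so
# thresholding the image difference flags depth edges. Values are invented.
def depth_edges(flash_left, flash_right, thresh=0.3):
    """Return a boolean mask of likely depth edges along one scanline."""
    return np.abs(flash_left - flash_right) > thresh

left  = np.array([0.9, 0.9, 0.1, 0.9])  # shadow falls on pixel 2
right = np.array([0.9, 0.1, 0.9, 0.9])  # shadow shifts to pixel 1
edges = depth_edges(left, right)
```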

Re:Quite old news (4, Informative)

jellomizer (103300) | about 6 years ago | (#24768495)

But this time the camera stays fixed, and one shot is taken without flash and the other with it. That allows 3D cameras to be made on the cheap with just a firmware upgrade (one click of the camera takes two shots, one without flash and the next with). Your way is different, as it requires the camera to have two flashes, and thus the making of new cameras.

Re:Quite old news (1)

gardyloo (512791) | about 6 years ago | (#24768719)

You're right -- my way requires two flashes (it really doesn't, but we found it slightly more effective that way). The old slashdot article which I mention (but don't reference) also talked about only needing one camera. I think that it said that Chilton's Repair Manuals was using both techniques to produce their series of DVDs. Of course, I could be really wrong!

Re:Quite old news (1)

Firehed (942385) | about 6 years ago | (#24770515)

That won't work nearly as well unless you know the location and intensity of the ambient lighting sources for the non-flash image. In theory, you could make a fairly simple system that has two strobes (one on either side of the lens) that are powerful enough to overcome ambient, and use that slight difference to map out the texture (though for maximum effect, you'd really want them offset from the lens by 45° or so). The advantage of that approach is that you could fire off two shots in such quick succession that you could get something fairly accurate without use of a tripod, whereas a flash/no-flash shot requires the latter to burn in the ambient to a point where you'll have a usable image, which will fail miserably indoors without a tripod.

It's a price you pay for simplicity versus effectiveness. As a photography lighting geek, I've got the equipment to rig things up quite easily with two strobes at known locations and get a quite effective texture map. I could do it relying on ambient for one of the shots (and losing most of the texture with an on-camera strobe for the other) with any random gear I could pick up, but by nature it can't be as effective nor as accurate. Like anything else, you can half-ass it with existing equipment or spend a little to do a much better job. I'm talking < $100 worth of gear to set up any old camera with a hotshoe to have a cheap off-camera flash that will do a job that's probably an order of magnitude better. Even if you're not in the business of doing this kind of thing professionally, it's not a whole lot of money to spend; it's very multipurpose gear too so it can help out well beyond the reaches of this niche. Ask David [blogspot.com] for an example or two - it's not specific to this, but there are plenty of examples of how directional lighting can reveal textures on anything.

Re:Quite old news (1)

nneonneo (911150) | about 6 years ago | (#24774319)

My digital camera, a Fujifilm Finepix Z10, actually happens to have this feature (called "Natural + Flash"). I've even used it a couple of times; the intention is to allow you to get a photo under different lighting to see which one is better (both are saved though). I guess this gives my camera a new use :)

Re:Quite old news (-1, Flamebait)

Anonymous Coward | about 6 years ago | (#24768615)

you can go ahead and capture the 3D image of the surface of my arse when I flash you

Re:Quite old news (4, Informative)

glyph42 (315631) | about 6 years ago | (#24768731)

NOT old news. Google for "2008 siggraph papers". Read the paper. Google for "2004 siggraph papers". Read about the old paper. Note the differences. Tim Rowley posts links to the papers from each year, so his site is recommended. Virtually all of these image-processing-related news items can be read long before they reach slashdot simply by keeping up with the latest papers from siggraph. In case you're lazy, the old paper is "Non-photorealistic Camera: Depth Edge Detection and Stylized Rendering Using a Multi-Flash Camera". Oddly, it's offline now. But I do have a copy of it on my hard drive. If you're not lazy, I HIGHLY recommend perusing all of the years' papers listed on Tim's site.

Re:Quite old news (1)

gardyloo (512791) | about 6 years ago | (#24768983)

http://groups.csail.mit.edu/graphics/pubs/siggraph2004_nprcamera.pdf [mit.edu]

Perhaps the previous slashdot story wasn't "old" -- if you count things post-2004 as "new". However, even the paper in the .pdf notes that people have been concertedly using these techniques since 1998, and I happen to know that a lot of the work was pioneered as early as the mid-1940s with depth-maps and stereograms. The new work IS nice, but it's not totally new.

Re:Quite old news (1)

glyph42 (315631) | about 6 years ago | (#24769041)

Good find with the link.

The new work IS nice, but it's not totally new.

Of course. Not much work is totally new. But it's new enough to be accepted into Siggraph, which is not an easy conference to get into.

Re:Quite old news (1)

badboy_tw2002 (524611) | about 6 years ago | (#24769663)

So things have to be completely revolutionary in order to count as new? There's no such thing as evolutionary development? Your link is appreciated, and helps further the discussion, but why bookend it in such a "haughty" tone implying that the work is a dupe or nothing worth noting? Lots of papers in the same field will seem similar, but each can often provide a new valuable insight building on the last one. To imply that nothing is new because someone did something in the 1940's is asinine and arrogant, and discounts the work the current researchers are doing.

Re:Quite old news (1)

gardyloo (512791) | about 6 years ago | (#24769747)

Please. When I write papers, I reference works all the way back to Newton, Galileo, and even before (a nice habit inculcated in me by my former advisors and current boss), and I *know* that much of what I do is not new (or, if it is new, it is usually new only in the context of the field in which it's placed).

What I was apparently being "haughty" about was the breathless way in which advancements are lauded on the front page of slashdot as though they're revolutionary. To not acknowledge the significant corpus of works which have gone _before_ is perhaps not arrogant, but at least misleading and careless.

Re:Quite old news (0)

Anonymous Coward | about 6 years ago | (#24769839)

that's what the slashdot summaries are lacking...exhaustive footnoting. i can't wait. :/

yes, it's one of the above "related links" (2, Interesting)

timothy (36799) | about 6 years ago | (#24769381)

Hi!

I know they're not as conspicuous as they could be, but there are frequently stories included near the body of the new story. It took me a while to dig this one up (I remembered posting it, but that was several thousand posts ago, and a few years, too), so I hope people notice it.

https://science.slashdot.org/article.pl?sid=04/12/01/0238222 [slashdot.org]

Cheers,

timothy

Re:How accurate is accurate? (1)

turkeyfish (950384) | about 6 years ago | (#24773343)

Seriously, how was such accuracy determined, and to what precision can depth "measurements" be made?

Re:How accurate is accurate? (1)

gardyloo (512791) | about 6 years ago | (#24773539)

My project isn't *extremely* concerned with precision, but for a monochromatic light source and a nice background, one can easily obtain depths to ~1/50 mm from shadow-shifts. This is about one part in 500 of the object height. For two monochromatic sources, the precision increases to about 1/70 mm. More sources increase the precision a bit, but due to specularity and diffraction effects, white light decreases the precision a little bit.

Flash in a camera? (1, Funny)

Skapare (16644) | about 6 years ago | (#24768425)

They make a version of Flash for digital cameras? Is it secure?

Re:Flash in a camera? (2, Funny)

MBCook (132727) | about 6 years ago | (#24768475)

Yes, but for some odd reason it lacks any kind of image capture support.

Re:Flash in a camera? (1)

fbjon (692006) | about 6 years ago | (#24769401)

Well duh, it's for projecting images, not capturing them. It only supports one color and the blink tag, though.

The School of Computer Science and Dolby Canada (1)

techmuse (160085) | about 6 years ago | (#24768493)

This is quite unusual for a university. Many schools have a department of computer science or a school of computer science. But combining that with a school of Dolby Canada is quite unusual. What kind of degrees in Dolby Canada do they offer? :-)

Re:The School of Computer Science and Dolby Canada (2, Funny)

gstoddart (321705) | about 6 years ago | (#24768583)

What kind of degrees in Dolby Canada do they offer?

Primarily "Blinding Yourself with Science", with a minor in "Sound and Signal Processing".

Cheers

Warning: (5, Funny)

Anonymous Coward | about 6 years ago | (#24768537)

TFA requires Flash.

Re:Warning: (1)

argent (18001) | about 6 years ago | (#24768857)

Mod parent up funny?

Re:Warning: (-1, Redundant)

Anonymous Coward | about 6 years ago | (#24770625)

Mod parent down Redundant?

Re:Warning: (0)

Anonymous Coward | about 6 years ago | (#24770997)

Mod parent troll?

A question for mojokid (4, Insightful)

sm62704 (957197) | about 6 years ago | (#24768587)

Why didn't you just link to the more informative New Scientist [newscientist.com] article that the blog you linked quoted?

Re:A question for mojokid (5, Insightful)

discards (1345907) | about 6 years ago | (#24768915)

Because it's his blog and he would like some traffic.

Re:A question for mojokid (2, Interesting)

RyoShin (610051) | about 6 years ago | (#24769379)

Because the NewScientist article doesn't get him the 18 billion ad impressions.

Seriously, look at the page in Firefox with AdBlock. Seems... kinda bare, right? It did to me, and I opened it in Opera (where I don't have ad blocking set up) and almost every single blank space had an ad.

These are the kind of sites that require AdBlock.

Re:A question for mojokid (1)

sm62704 (957197) | about 6 years ago | (#24769519)

These are the kind of articles that shouldn't be posted on slashdot's front page. It's not like his was the only submission.

Now where's the download link for the GIMP plugin? (1)

schwaang (667808) | about 6 years ago | (#24768653)

That's really freakin cool. How long before there's a GIMP plugin for this? I'd like it by 3pm Pacific please.

The more things change the more they stay the same (1)

e03179 (578506) | about 6 years ago | (#24768655)

8 years ago a manager in my lab thought that you could use a digital camera to get a 3D mesh model of whatever you photographed. It's a digital camera right? It took months for us to explain what a digital camera really was. Maybe he should have been teaching us!

Don't get too excited (3, Informative)

Rui del-Negro (531098) | about 6 years ago | (#24768717)

This is just a way to automatically generate surface bump maps. It does not really capture depth information (like a Z-buffer).

Conceptually it seems simple enough: take a photo with shadows from a light source not in line with the camera, take another where all the shadows are in line with the camera (making them virtually invisible), tell the software which direction the light is coming from in the first photo, and let it figure out the relative height of each pixel by analysing the difference between it and the uniform (flash-lit) version, after averaging the brightness of the two. It's similar to the technique some film scanners use to automatically remove scratches.

I can think of a lot of cases where it won't work at all (shiny objects, detached layers, photos with multiple "natural" light sources, photos with long shadows), but still, for stuff like rock or tree bark textures it should save a lot of time. As the video suggests, this should be pretty useful for archaeologists.
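Under the strong assumptions the parent describes (Lambertian surface, one distant light of known direction, color already divided out using the flash-lit shot), the height-recovery step might be sketched in one dimension like this; the function and the numbers are illustrative, not the paper's algorithm:

```python
import numpy as np

# 1-D shape-from-shading sketch under the assumptions above. The relation
# inverted here is the Lambertian one: brightness = cos(light_tilt - tilt)
# for a surface normal tilted by `tilt` from vertical. It picks one of the
# two possible tilt solutions per pixel (shape-from-shading is ambiguous).
def heights_from_shading(shading, light_deg, dx=1.0):
    """Integrate the per-pixel slopes implied by shading into a relative
    height profile along one scanline."""
    shading = np.clip(shading, 0.0, 1.0)
    tilt = np.deg2rad(light_deg) - np.arccos(shading)
    slope = np.tan(tilt)
    return np.cumsum(slope) * dx

# Sanity check: a flat scanline lit from 30 degrees off vertical has
# brightness cos(30 deg) everywhere, so the recovered profile stays flat.
flat_shading = np.full(5, np.cos(np.deg2rad(30)))
heights = heights_from_shading(flat_shading, 30)
```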

Re:Don't get too excited (2, Informative)

collywally (1223456) | about 6 years ago | (#24769001)

Actually, you can use a bump map (which just changes the angle light is reflected at without deforming the actual surface) to create a displacement map (which actually moves the polygons up and down). You just have to play a little with the depth to get it right. And when using something like RenderMan, which does displacement almost as fast as other renderers do bump maps, it doesn't take long to figure out the right depth.

Re:Don't get too excited (2, Informative)

Rui del-Negro (531098) | about 6 years ago | (#24771415)

Well, yes and no. The problem isn't how you use the map (to fake the normals or actually displace vertices), the problem is what kind of maps this technique can create. And my point is that it can't handle (for example) the Z-range of something like a person's face. Anything deep enough to actually cast shadows over other (relevant) parts of the geometry will break it (a shadow will appear much darker, and the algorithm will assume it's a surface facing away from the light, or a hole). Use the result as a displacement map and it'll look very weird.

Panasonic (IIRC, possibly JVC or someone else) was working on a video camera that could capture a Z-buffer in real time (meant to be used as a replacement for chroma-keying), but I don't think they ever put a usable product out the door. The techniques used in Radiohead's "House of Cards" video look interesting, too, but also not really usable in most cases.

Anyway, the technique mentioned in this article should still be practical for bas-reliefs and shallow matte surfaces, which is what archaeologists deal with most of the time.

P.S. - Dense geometry (required by displacement maps) isn't particularly slower to render for any high-end shaders (raytracing / photons / GI / QMC / whatever). But those are always painfully slow (compared to basic non-GI, shadow mapped, non-bouncing renderers), and the denser meshes required for good displacement mapping still take up huge amounts of RAM, so bump still has its place.

Outside the box (2, Insightful)

Anonymous Coward | about 6 years ago | (#24768781)

Probably has significant potential in the pr0n industry.

Article has a minor gaffe (1)

jeffmeden (135043) | about 6 years ago | (#24768831)

First an image of a surface is captured with flash. The problem is that the different colors of a surface also reflect light differently, making it difficult to determine if the brightness difference is a function of depth or color. By taking a second photo without flash, however, the accurate colors of all visible portions of the surface can be captured.

This is reversed: the flash-lit image will show you the reflectance (and possibly some depth) information, whereas the non-flash-lit image will show you the bare color map for the scene (provided the scene is properly lit to begin with). FTFY!

Re:Article has a minor gaffe (3, Informative)

sm62704 (957197) | about 6 years ago | (#24769069)

No, with flash (light source coming from the camera) shows the colors without shadows; i.e. without color perspective. Without flash (light source at an angle to the model/subject) shows the deeper parts in shadow (known to us former art students as "color perspective").

You could actually do this with two flashes, provided one was on the camera and one to the side. The fact that it flashes has nothing to do with it; it has to do with the angle of the light sources.

Re:Article has a minor gaffe (1)

Sj0 (472011) | about 6 years ago | (#24769243)

Sounds like what you'd actually want is one with no depth information from lighting at all, and one with only depth information from flash.

The fully lit one would contain the base colours; the flash one would drop off in brightness as the square of the distance.

Of course, as a voxelmap, I'd argue that it's not very useful to 99% of applications...
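The inverse-square falloff mentioned above gives a quick way to read relative distance back out of a flash-only image, at least for a constant-albedo surface facing the camera. A toy sketch, with all numbers invented:

```python
import numpy as np

# Toy illustration of the inverse-square idea above: if a constant-albedo
# surface is lit only by the flash, brightness falls off as 1/d^2, so
# relative distance is 1/sqrt(brightness). Real scenes also vary in albedo
# and surface angle, which this deliberately ignores.
def relative_distance(flash_only, eps=1e-12):
    return 1.0 / np.sqrt(flash_only + eps)

flash_only = np.array([1.0, 0.25])   # second patch returns 1/4 the light
distances = relative_distance(flash_only)
```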

Re:Article has a minor gaffe (2, Interesting)

jeffmeden (135043) | about 6 years ago | (#24769319)

That's contrary to the article abstract. They describe using the difference between a diffuse lit scene (no shadows) and a flash lit scene (shadows only due to deviation of flash angle) where the brightness delta is used to fudge a distance/reflectivity calculation. Shadow detection is not a part of it, at least in this particular paper.

so.... (1)

Sir_Real (179104) | about 6 years ago | (#24768869)

Much like the printing press, I can only assume this technology will find its first commercial success in pornography. Some angles are worth hiding.

hello.jpg (1)

77Punker (673758) | about 6 years ago | (#24769297)

3d goatse! Awesome!

Also, that would be quite a depth calculation!

Re:so.... (1)

Born2bwire (977760) | about 6 years ago | (#24770061)

Dear Sir,
          I would very much like to see what version of the Gutenberg Bible that you have been reading.

Re:so.... (0)

Anonymous Coward | about 6 years ago | (#24770323)

I would very much like to see what version of the Gutenberg Bible that you have been reading.

The Gutenberg Bible was a commercial success?

Re:so.... (1)

gnick (1211984) | about 6 years ago | (#24770653)

The Gutenberg Bible was a commercial success?

No. But only because of all the damned pirates running off copies without compensating the authors.

Why a flash? (3, Interesting)

phorm (591458) | about 6 years ago | (#24769035)

Why not cameras that use different wavelengths of light, etc? For example, one that works in visible light, and one that works in infrared?

How about the use of different polarized lenses to block certain wavelengths of light?

Re:Why a flash? (2, Informative)

Anonymous Coward | about 6 years ago | (#24769435)

RTFA. Because it is a cheap method. This way you do not need expensive infrared cameras or polarizers or, as mentioned in the article, laser equipment.

And the great thing is, the results are perceived as being as good as those obtained from more expensive equipment.

Re:Why a flash? (1)

Josef Meixner (1020161) | about 6 years ago | (#24769713)

And of what use would that be? Reflectance is not very dependent on wavelength for most materials. Only thin materials with thickness in the range of lambda / 2 to lambda / 4, and transparent materials with an optical density different from that of the surrounding medium at surfaces (refraction and dispersion), show big differences with wavelength. Other materials' reflectance is only lightly dependent on wavelength. The permittivity (refractive index) basically defines how strong it is via the Fresnel equations for dielectric reflection. Metals show the strongest wavelength dependence at different angles; materials like plastic show nearly none.

So how would you use the results of different wavelengths without knowing the material, as you couldn't calculate the angles from the observed image? Besides, what do you think red, green, and blue are? Additionally, many digital cameras actually capture infrared light, some have an IR filter built in (so it won't work), and it depends on the lens (some materials and coatings used in lenses absorb IR light). There are some web sites about that subject (Google Search [google.com]).

What escapes me is how you would use a polarizer to block wavelengths, as it is not very selective. Technically, devices blocking wavelengths are called "filters", and you can get quite a lot of colors commercially (and many B&W photographers have a handful of really colorful ones). But again, I don't see how that would help to reconstruct 3D information.
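Since the Fresnel equations for dielectric reflection come up in the parent, here is a standard textbook version for unpolarized light. This is general physics, not specific to the article; the air-to-glass indices are just example values:

```python
import math

# Standard Fresnel reflectance of unpolarized light at a dielectric
# interface. Textbook formulas, not tied to the paper; n1=1.0 (air) and
# n2=1.5 (glass) are example values.
def fresnel_reflectance(n1, n2, incidence_deg):
    ti = math.radians(incidence_deg)
    sin_tt = n1 * math.sin(ti) / n2   # Snell's law
    if abs(sin_tt) >= 1.0:
        return 1.0                    # total internal reflection
    tt = math.asin(sin_tt)
    rs = ((n1 * math.cos(ti) - n2 * math.cos(tt)) /
          (n1 * math.cos(ti) + n2 * math.cos(tt))) ** 2
    rp = ((n1 * math.cos(tt) - n2 * math.cos(ti)) /
          (n1 * math.cos(tt) + n2 * math.cos(ti))) ** 2
    return 0.5 * (rs + rp)            # unpolarized: average of s and p

# At normal incidence this reduces to ((n1 - n2) / (n1 + n2))**2, i.e.
# about 4% for an air-glass boundary.
r_normal = fresnel_reflectance(1.0, 1.5, 0.0)
```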

Re:Why a flash? (0)

Anonymous Coward | about 6 years ago | (#24771209)

Why not cameras that use different wavelengths of light, etc? For example, one that works in visible light, and one that works in infrared?

How about the use of different polarized lenses to block certain wavelengths of light?

Because this would not give you any information to determine depth from. And a polarizer does not block certain wavelengths of light; it causes attenuation in certain polarizations of light.

Re:Why a flash? (1)

MrBigInThePants (624986) | about 6 years ago | (#24771759)

Because every camera has a flash, and what you are suggesting requires specialist equipment costing a lot of money, as well as calibration, etc.? (Possibly custom/purpose-made equipment, which will increase the cost significantly.)

The article makes direct reference to cost. Hell, you could go with laser 3D imaging if money was no object, right?

Re:Why a flash? (1)

phorm (591458) | about 6 years ago | (#24772623)

I'm not sure about digital snapshot cameras. But I've seen plenty of security/web/etc cameras that do IR. Filters for a camera may be somewhat affordable as well.

Sounds like gradient maps... (0, Redundant)

blahplusplus (757119) | about 6 years ago | (#24769075)

I noticed this when I was in Photoshop: if you pick a circular brush and choose white on a black background, you can "paint" quasi-3D-ish landscapes, because of the way perspective works. And you can turn it into a height map; Supreme Commander uses a similar/same method.

It sounds like they just figured out how to use photographic techniques to make a height map.
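That brush trick can be sketched in a few lines (NumPy; the function name and the linear falloff are my own choices, not anything from the article): a white-on-black radial gradient read directly as a height field.

```python
import numpy as np

def soft_dab(size=64, radius=24):
    # A soft white circular "brush dab" on a black background.
    # Read the grayscale values directly as heights: 1.0 = peak, 0.0 = ground.
    y, x = np.mgrid[:size, :size] - size // 2
    r = np.hypot(x, y)                       # distance from the dab centre
    return np.clip(1.0 - r / radius, 0.0, 1.0)
```

Stacking a few of these with `np.maximum` gives exactly the kind of quasi-3D terrain the parent describes.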

Hello, what about Victorian-era stereographs? (1)

zooblethorpe (686757) | about 6 years ago | (#24769109)

Can anyone elucidate why this is so whizbang neato when we've had 3D photography ever since someone with a camera figured out about parallax [wikipedia.org] ? Why is this different from stereoscopy [wikipedia.org] ?

Bemused,

Re:Hello, what about Victorian-era stereographs? (4, Interesting)

MBCook (132727) | about 6 years ago | (#24769361)

Parallax and stereoscopy both require the camera to be in two (or ideally with parallax more) positions. The ingenious thing about this idea (watch the video, it's good) is that the camera doesn't need to be moved. By taking two shots in the same spot, one with flash and one without, you can get a good depth map.

Now it's not as good as a laser scanner, but it's much cheaper and faster and smaller (since you could use any little camera). It's a very simple but ingenious idea. I'm quite surprised by the amount of detail they are able to get this way.

Of course it could be argued that parallax and stereoscopy are ways of viewing images with pseudo-depth as opposed to taking them (at least for the purpose of this article). Parallax has no real depth, but helps simulate the effect in the brain. Stereoscopy has no depth, but works just like the eyes to give the brain the data it needs to reconstruct the depth.

Whizbang for lighting & textures, not 3D-ness (1)

zooblethorpe (686757) | about 6 years ago | (#24770307)

Both approaches require taking two photographs, so I confess I don't see too much difference that way. Part of what I'm confused about, I guess, is why it's easier to reconstruct 3D-ness from flash+nonflash rather than from parallax. Per your point, yes, stereoscopy has no depth per se, but then neither does flash+nonflash, really, which appears to be suggested by this bit:

...one aspect that researchers are still working on is how to capture an image that incorporates more than one surface field, such as vines growing up a brick wall. As the technique extracts a height field, it is not possible to "represent the two separate distinct bits of geometry"...

Reading through it again, I think what's important about this approach has much more to do with lighting and surface textures than with 3D:

The two captured images essentially become a reflectance map (albedo) and a depth map (height field)...

..."That information is used to produce a realistic rendering of a surface's texture. By altering the direction of illumination on the virtual surface the system can generate realistic shadow effects."

Cheers,

Re:Whizbang for lighting & textures, not 3D-ne (1)

MBCook (132727) | about 6 years ago | (#24770975)

You're right that this still requires two pictures, but they are taken from the same point of view. You don't have to move the camera, re-focus, etc. To get stereoscopy to look right for human eyes, the cameras need to be just the right distance apart, otherwise things look weird or out of scale; I'd imagine you'd have a similar issue with computer processing. To get much depth with parallax I think you need to have the camera shots a good distance apart as well, especially if you are trying to photograph something mostly planar (like Mayan carvings on stone temples). This should be able to pick up those finer things easily.

Your bit about the lighting and surface textures: that's the sense I got from the video as well. What they seem to be doing is using the flash to get the correct color of the object. By using that, they can determine how far back an object is set (based on how much darker it is), and that is where the depth comes from (at least at a very basic level).

Still, it's a very neat idea and very approachable. As one of the project people mentions near the end of the video, bump maps for games are created by hand. I'd imagine that if I could just take two pictures (one with flash, one without) and get some depth information, I could play around with that idea very easily on my computer and come up with something neat. Compare that to taking two (or more) shots from different positions, trying to match everything up, etc.
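A minimal sketch of that flash-as-albedo idea (NumPy; the function name, the ratio trick, and the normalisation are my own assumptions, not the paper's actual algorithm):

```python
import numpy as np

def shading_map(no_flash, flash, eps=1e-6):
    # The flash shot roughly captures surface colour (albedo), so dividing
    # the ambient shot by it cancels colour and leaves mostly shading/depth
    # cues -- a crude illustration of the idea, not the published method.
    ratio = no_flash.astype(np.float64) / (flash.astype(np.float64) + eps)
    return ratio / ratio.max()   # normalise so the brightest cue is 1.0
```

Feeding real flash/no-flash pairs through this would of course need the two exposures aligned and exposure-matched first.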

Re:Whizbang for lighting & textures, not 3D-ne (1)

Sj0 (472011) | about 6 years ago | (#24771269)

Here's my take on it:

Parallax and stereoscopy won't give you 3d information. There is no depth field. You've got two 2d images from which 3d information can be inferred, but no 3d information.

This technique sounds like it would give you two arrays: the first array would be a colour map, the second a height map. This would be done basically by taking the image without any flash, which has no distance cues based on distance from the flash, and comparing it to the image with flash, which is lit in such a way that the brightness falls off with the square of the distance. Having both pictures, and having physics, will let you create a distance map. Then take the first picture and apply it as a skin over the distance map created from the second, and you've got something you didn't have before: a true 3d image, which you should be able to rotate and look at. You won't be able to see behind the object or anything insane like that, but you could conceivably take two pictures of someone's face, and get a 3d snapshot of the face which would require only small changes to look normal.
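Here's a toy version of that inverse-square reasoning (NumPy; the model, the function name, and the albedo-from-flash shortcut are assumptions for illustration, not the researchers' method):

```python
import numpy as np

def depth_from_flash_pair(flash, no_flash, albedo=None, k=1.0, eps=1e-6):
    # Toy model: the flash-only contribution is I = k * albedo / d**2,
    # so d = sqrt(k * albedo / I). Subtracting the no-flash shot removes
    # ambient light; using the flash shot itself as a stand-in for albedo
    # is a rough shortcut, not part of the published technique.
    direct = np.maximum(flash - no_flash, eps)
    if albedo is None:
        albedo = flash / flash.max()
    return np.sqrt(k * albedo / direct)
```

With a synthetic scene of known depths and uniform albedo, the recovered distance map matches the ground truth exactly; real photos would be far noisier.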

Re:Whizbang for lighting & textures, not 3D-ne (1)

Anachragnome (1008495) | about 6 years ago | (#24772579)

"You won't be able to see behind the object or anything insane like that, but you could concievably take two pictures of someone's face, and get a 3d snapshot of the face which would require only small changes to look normal."

I seem to recall a short story somewhere (can't remember where, or by whom) where the protagonist was working with the same kind of technology but found that he COULD see the back of objects. If I remember correctly, he could see the back of objects in the photos, but when he actually went and looked at the back of the objects he photographed, what he observed was entirely different than what his camera recorded. Essentially, he had stumbled onto a sort of alternate reality.

Fiction, obviously, but it made for an interesting read.

Re:Hello, what about Victorian-era stereographs? (1)

qdaku (729578) | about 6 years ago | (#24770327)

I also fail to see the whizbang-neato part. It's also limited by the reach of the flash. Modern photogrammetrical methods are, quite frankly, astounding. I was recently at a photogrammetry and LIDAR conference on imaging pit slopes and it was fascinating. Some software relied on a simple digital camera and a decent lens (calibrated to account for radial distortion) and could return shockingly good results with good accuracy. In these cases, the distance to the object (the pit face) being shot can be significant, and a flash would not have had a hope in hell of reaching it.

Mind you, this stuff is being used for picking out orientations of joints/faults/bedding for generating stereonets. Accuracy is getting down there, though, and it is beginning to be used to track total slope deformation over time.

How well does this work with faces? (1, Interesting)

Anonymous Coward | about 6 years ago | (#24769159)

I wonder how well this works with faces, if it works well it could be an easy way to create head busts for 3d heads for "icons" in your contact list.

Re:How well does this work with faces? (1)

Kaetemi (928767) | about 6 years ago | (#24769931)

Eyetronics (http://www.eyetronics.com/) has a similar technology (they flash different patterns and compile the 3d image from that). It's the company that does the 3d scans for a lot of movies and games these days. I saw a live demonstration of it once, where people could just go sit in a picture booth and have their face photographed to a 3d file. It works really fast, and the result is OK (as long as the system is synchronized correctly).

They also did a presentation where they did say that they were planning to roll out actual picture booths in popular locations, for allowing people to upload a 3d image of themselves on their online profiles.

The Small Print (1, Funny)

Anonymous Coward | about 6 years ago | (#24769259)

Caution: Do not use camera, flash or not, around minors, some asians, some tribes of africa and south america, or anyone in the protection of the united states federal government. Use of camera in any of these situations can result in physical harm or jail time.

Not true 3D (1)

nurb432 (527695) | about 6 years ago | (#24770317)

But still could be good for quick and dirty bumpmaps.

3D geometry (1)

Bones3D_mac (324952) | about 6 years ago | (#24770461)

This actually isn't all that different from some methods I've seen for generating 3D geometry of a subject using cameras and lighting. One method in particular uses cameras mounted at strategic locations around the subject while a DLP projector rapidly displays a series of light and dark line patterns across the subject's surface, shooting photos of the lines as it goes.

Not quite as cool as a 3D scanner using lasers, but it seems to be easier on subjects like humans or animals that tend to move a lot.
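For that structured-light variant, the projected stripe patterns are typically Gray codes; decoding the photos back into per-pixel stripe indices might look like this (NumPy sketch; thresholding the raw photos into 0/1 bit images is assumed done elsewhere, and the function name is my own):

```python
import numpy as np

def decode_gray_patterns(bit_images):
    # bit_images: one 0/1 array per projected pattern, most significant bit
    # first; each pixel records whether it was lit by that pattern.
    prev = bit_images[0].astype(np.uint32)   # first Gray bit == first binary bit
    index = prev.copy()
    for g in bit_images[1:]:
        prev = prev ^ g.astype(np.uint32)    # Gray -> binary: b_i = b_{i-1} XOR g_i
        index = (index << 1) | prev
    return index                             # per-pixel stripe index
```

Triangulating the decoded stripe index against the camera ray is then what yields actual depth, which is where this differs from the flash/no-flash trick.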

Why go to all that trouble?? (0)

Anonymous Coward | about 6 years ago | (#24771031)

All you need to do is rig your cellphone to emit a high frequency pulse and then post-process the sonar to get a 3D map of the environment. I saw Morgan Freeman do it...

A couple of points. (0)

Anachragnome (1008495) | about 6 years ago | (#24772449)

One, I wonder what the results would be if this were implemented with a standard emulsion-film camera and a double exposure of the same film, one exposure with flash and the other without. I no longer own an emulsion-film camera, so I cannot test it and evaluate the results.

Two, this also might explain something odd I experienced in the desert of California, at a ghost town called Rhyolite. My wife and I were approaching the town late at night when we noticed bright flashes coming from the direction of the ghost town. Very bright. As we got closer, we saw that someone was aiming what appeared to be a modified, hand-held aircraft landing light (with a momentary trigger) at the old bank building. They had a single camera set up and proceeded to light the outside of the building from different angles, repeatedly. They did this for quite some time. I am not sure if they had a single running exposure going or multiple exposures, and I am not sure what their goal was, but this might be an explanation: they could quite possibly have been trying to achieve a 3D effect with emulsion film (this was 20+ years ago, so I doubt they were doing digital photography).

Just a couple thoughts.

FujiFilm FinePix S700 (0)

Anonymous Coward | about 6 years ago | (#24772767)

My camera already has a feature for this. It has a mode that takes two consecutive pictures (one with flash and one without). All I need now is a little software and I have a 3D camera. :-)
