2D To 3D Object Manipulation Software Lends Depth to Photographs

timothy posted about 3 months ago | from the artist's-conception dept.

Graphics 76

Iddo Genuth (903542) writes "A group of students from Carnegie Mellon University and the University of California, Berkeley has developed free software which combines regular 2D images with free 3D models of objects to create unbelievable video results. The group of four students created the software (currently Mac OS X only), which allows users to perform 3D manipulations, such as rotations, translations, scaling, deformation, and 3D copy-paste, on objects in photographs. However, unlike much existing 3D object manipulation software, the team's approach seamlessly reveals hidden parts of objects in photographs and produces plausible shadows and shading."

Win32 and Porn (0)

Anonymous Coward | about 3 months ago | (#47622975)

The only way this will really explode.

Re:Win32 and Porn (1)

rubycodez (864176) | about 3 months ago | (#47623013)

or if there were a Linux-only porn 3D rendering engine, it would surely bring the Year of the Linux Desktop to pass

Re:Win32 and Porn (0)

Anonymous Coward | about 3 months ago | (#47623051)

So I took photos of your mom and created porn with them. You say she's not a whore. Well, now she is! Ta da!!!

Bet you didn't think she could bend that way huh? The power of the Internetz!!!

Re:Win32 and Porn (1)

rubycodez (864176) | about 3 months ago | (#47623425)

You misuse the language; "whores" by definition charge money

Re: Win32 and Porn (0)

Anonymous Coward | about 3 months ago | (#47623739)

Touché!

Re: Win32 and Porn (0)

Anonymous Coward | about 3 months ago | (#47624551)

Touch What?

Re: Win32 and Porn (1)

rubycodez (864176) | about 2 months ago | (#47642851)

looks like "touchA (copyrighted) in my browser". If you can touch her A for free that's not whoring

Re:Win32 and Porn (-1)

Anonymous Coward | about 3 months ago | (#47623181)

I exploded in your mom's snatch. 9 months later she shat you out.

Should have just gotten head that night...

Re:Win32 and Porn (-1)

Anonymous Coward | about 3 months ago | (#47623351)

Man, the NSA astroturfers must have gotten really cranky after that Snowden article.

Re:Win32 and Porn (0)

Anonymous Coward | about 3 months ago | (#47624711)

The NSA is really mad at Obama, because Obama cuckolded the NSA, and then the NSA is called "racist".

Carnegie Melloned (3, Funny)

AaronLS (1804210) | about 3 months ago | (#47623099)

No longer is it Photoshopped, but instead we say it's been Carnegie Melloned.

Re:Carnegie Melloned (-1)

Anonymous Coward | about 3 months ago | (#47623117)

Like your mom!!!

Re:Carnegie Melloned (1)

FatdogHaiku (978357) | about 3 months ago | (#47624565)

So, she would be a Carnegie Mellon Baller?

Free software? (-1)

Anonymous Coward | about 3 months ago | (#47623105)

Bunch of Communists...

am i missing something? (1)

Anonymous Coward | about 3 months ago | (#47623129)

isn't this just texture mapping onto a 3d model?

Re:am i missing something? (1)

Russ1642 (1087959) | about 3 months ago | (#47623141)

The same way that Avatar was just computer animation, like Toy Story.

Re:am i missing something? (1)

Anonymous Coward | about 3 months ago | (#47623749)

Not really. This is quite a simple process: pick a model, apply a texture to it, and manipulate at will. More impressive is code that generates the model from the image itself.

Re:am i missing something? (-1, Troll)

Anonymous Coward | about 3 months ago | (#47623195)

You probably miss a lot of things, because you are a mental retard

Re:am i missing something? (-1)

Anonymous Coward | about 3 months ago | (#47623465)

This person speaks the truth. Mod up, bitches!

Re:am i missing something? (0)

Anonymous Coward | about 3 months ago | (#47623595)

Texture mapping is just the final step in the process. Read the full paper [cmu.edu] for more info.

Re:am i missing something? (0)

Anonymous Coward | about 3 months ago | (#47626357)

Yes. It's the same thing that you've been able to do with SketchUp for years.

Photoscan [youtube.com] is a lot more interesting and practical for creating models out of real objects.

I'm impressed (2)

oodaloop (1229816) | about 3 months ago | (#47623193)

I don't do anything like this for a living, but I must say I'm impressed. I'm fairly certain someone will say this was done back in 1997 though so it's nothing new.

Re:I'm impressed (-1)

Anonymous Coward | about 3 months ago | (#47623239)

I do this for a living, but I must say I'm impressed. I'm fairly certain this was done back in 1997 though so it's nothing new.

Citation?

Re:I'm impressed (0)

Anonymous Coward | about 3 months ago | (#47623401)

You changed his message.

Re:I'm impressed (-1)

Anonymous Coward | about 3 months ago | (#47623481)

Blow it out your mom's snatch.

Re:I'm impressed (0)

Anonymous Coward | about 3 months ago | (#47627387)

No. He added texture.

Re:I'm impressed (0)

Anonymous Coward | about 3 months ago | (#47623265)

I'm fairly certain someone will say this was done back in 1997 though so it's nothing new.

Simpsons did it.

Re:I'm impressed (1)

Anonymous Coward | about 3 months ago | (#47627945)

More or less, it's an evolution of previous work.

Among other previous work:
Rendering Synthetic Objects into Legacy Photographs (2011)
http://www.youtube.com/watch?v=hmzPWK6FVLo

3-Sweep: Extracting Editable Objects from a Single Photo, SIGGRAPH ASIA 2013
http://www.youtube.com/watch?v=Oie1ZXWceqM

Ugh (1)

jabberw0k (62554) | about 3 months ago | (#47623227)

It's "a software" again. I'd like to give the author an information, perhaps while we eat a spaghetti, that in English we have "mass nouns" and thus you cannot have one hardware or one clothing or one software.

Re:Ugh (0)

Anonymous Coward | about 3 months ago | (#47623281)

It's "a software" again. I'd like to give the author an information, perhaps while we eat a spaghetti, that in English we have "mass nouns" and thus you cannot have one hardware or one clothing or one software.

R U n En. PROF or sum thing?

Re:Ugh (1)

wbr1 (2538558) | about 3 months ago | (#47623321)

Maybe the author smoked one whole marijuana.

Re:Ugh (1)

PotatoHead (12771) | about 3 months ago | (#47624067)

To make the high for their joy to come out.

Re:Ugh (0)

Anonymous Coward | about 3 months ago | (#47623561)

>I'd like to give the author an information

You'll be happy to know that you can do so while legitimately following the rules of English, so long as you do so in a court of law. Information is *mostly* a mass noun, but has exceptions.

Now, number 6, I'll need some information. In Formation!

Re:Ugh (0)

Anonymous Coward | about 3 months ago | (#47626075)

Who are you?

Re:Ugh (1)

Em Adespoton (792954) | about 3 months ago | (#47623807)

Actually, if you follow strict English rules, it should be "a software" or "softwares" -- the fact that we've nounized "software" doesn't make it right. Kind of like math vs maths -- maths is correct, but US English chooses math instead, as the abbreviation has been nounized.

Re:Ugh (0)

Anonymous Coward | about 3 months ago | (#47624431)

I find it hilarious that someone being so pedantic about English usage would use the hilariously awkward neologism "nounize."

Re:Ugh (2)

Zero__Kelvin (151819) | about 3 months ago | (#47625919)

Yes. I bought a bunch of hardwares the other day! Either that, or you are one idiot.

Ahh! Making of the understanding for peoples... (2)

PotatoHead (12771) | about 3 months ago | (#47624053)

...informations to better builds the good!

Bad informations with for the good people so making of the understanding isn't!

Should images even be admissible in court anymore? (4, Interesting)

roman_mir (125474) | about 3 months ago | (#47623305)

How can images be admissible in court in our modern technological age of 3D manipulation of 2D images? Sure, they still have visual artifacts (in the video presentation for this technology, for instance, when the airplanes are turned into 3D their propellers are not changed; the same flat image of a propeller is kept for the 3D model as in the original 2D picture), but eventually all of these will go away, and it may become impossible to detect that an image in front of you was manipulated at all.

Eventually this will also apply to video footage.

Add the digital augmentation of reality into the mix (Google Glass, etc.) and you can't rely even on recorded information. We know that people are not good at remembering the details of what they saw, but if we cannot be sure of images and video (and, obviously, audio) either, then this type of data becomes useless in courts. That's an interesting development in itself, never mind the fact that you can now turn a picture into a movie if you want.

Re:Should images even be admissible in court anymo (0)

Anonymous Coward | about 3 months ago | (#47623397)

IANAL, so I don't know how admissible photos are today, but I do know that some cameras support adding a digital signature to an image. If the image is altered later, the signature will be invalidated. This feature has been around for quite a while, and I believe journalists are the ones who use it.
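
For the curious, the core mechanism can be sketched as a keyed digest over the file bytes. A minimal Python sketch; the file name, key, and function names here are hypothetical, and real in-camera schemes are vendor-specific and typically public-key based:

    import hashlib, hmac

    def sign_image(path, key):
        # In-camera signing, reduced to its essence: a keyed digest of the file bytes.
        with open(path, "rb") as f:
            return hmac.new(key, f.read(), hashlib.sha256).hexdigest()

    def verify_image(path, key, signature):
        # Any alteration to the file changes the digest, invalidating the signature.
        return hmac.compare_digest(sign_image(path, key), signature)

    # Hypothetical usage:
    # tag = sign_image("IMG_0001.JPG", camera_secret)   # done at capture time
    # assert verify_image("IMG_0001.JPG", camera_secret, tag)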

Re:Should images even be admissible in court anymo (0)

Anonymous Coward | about 3 months ago | (#47624475)

And... it has been cracked [elcomsoft.com].

Re:Should images even be admissible in court anymo (2)

Russ1642 (1087959) | about 3 months ago | (#47623411)

Pictures and video are used in court, but someone testifies that they haven't been modified. If the defense argues that they have been modified, then a jury weighs the merits of that claim.

Re:Should images even be admissible in court anymo (2)

sjames (1099) | about 3 months ago | (#47624709)

The problem is that as technique improves, the theory that the photo/video was altered in a way that can't be detected becomes ever more plausible.

It was easy to take the witness's word for it when the alternative would involve millions in equipment and would likely be trivial to detect.

Re:Should images even be admissible in court anymo (2)

Alsee (515537) | about 3 months ago | (#47626353)

a jury weighs the merits of that claim

Unfortunately, I wouldn't trust the average juror to weigh a head of lettuce.


Re:Should images even be admissible in court anymo (0)

Anonymous Coward | about 3 months ago | (#47623641)

Pretty much this.

A few decades down the road, with software like this improving, it will be stupidly hard even for experts to see modifications.
Throw in some basic genetic algorithms, a large cache of images and models, and some rules for thousands of visual scenes, and a few million evolutions later it will be capable of modifying scenes so realistically that an expert would never be able to ID any changes.
Genetic algorithms are great for things like this. The more exposure to scenes, niche and common alike, the better.

Still, there will always be the "2nd image" problem. Things like CCTV, if static, will be harder to fake unless a person has access to the original captures.
It is easy to modify a unique picture so it doesn't look changed. If NOBODY knows what is behind that chair, you could put literally anything there, as long as it fits the repeating textures at the edges and what you expect to be there, be it a dead body or a huge dragon dildo.
But if someone hates you enough, that might not be a problem.

Meh. Big Deal (0)

Anonymous Coward | about 3 months ago | (#47623307)

They use existing models to 3D-map photographic texture data onto them, remove the 2D object from the photo, and then insert the texture-mapped model into the photograph.

Basically some light photogrammetry, some content-aware filling and some of that sweet, sweet Photoshop CS 3D insertion stuff.

This is NOT a big deal outside the automation aspect.

Still, I expect Adobe to buy it up or do something similar eventually, perhaps even opening a market for 3D objects.

P.S. I'm sure some of you nerds will say I'm wrong, but that's what I saw. No magic, nothing cutting edge. No stupendous breakthrough. About as interesting as SnapChat AFAIC.

A question on this (4, Interesting)

DigitAl56K (805623) | about 3 months ago | (#47623457)

While those results look impressive, in some of the demos where objects are seamlessly moved around, how are they filling in the original background (or what looks like it)? The video largely explains how the model is textured, lit, environment-mapped, and rendered with shadow projection, calculated perspective, and depth of field, but I didn't hear much about re-filling the background. I assume they're cloning or intelligently filling texture à la Photoshop, or perhaps in all the cases where they showed something being animated, it was a new clone of an existing object placed into a new area of the photo?

Re:A question on this (4, Interesting)

Bryan Ischo (893) | about 3 months ago | (#47623617)

I agree there was some trickery there. Since they did not address this at all, I am assuming that the answer is simply that they had to manually paint in the parts of the photos that were revealed when other parts were removed. Having to point that out in the video would take away from the apparent magic, which is probably why they didn't mention it (and that's somewhat disingenuous if you ask me).

It's possible that they provide some tool that attempts to automatically fill in the background, and if so it would appear that it was used in some of the examples: when the apple (or whatever it was) was moved in the painting, the area that was revealed looked more like the cloudy background than like the table the apple was on. But there's no way they automatically compute the background for anything that is not on top of a pattern or a more or less flatly shaded surface. I also noticed that in some examples they were merely adding new objects to the scene (such as the NYC taxi cab example); although they started with a scene that looked like the cab was already there and moved it to reveal painted chevrons underneath, it's likely that those chevrons were already in the photo and didn't need to be recreated.

In short: they glossed over that detail and used examples that didn't require explaining it, but it's certainly an issue that a real user would have to address, and it doesn't happen as "magically" as it would appear from the video.

BTW, CMU alum here. Went back to campus for the first time in nearly 20 years earlier this year. My how things have changed. I suppose every college is the same way now, but holy crap it's so much more cushy than it used to be! Guess all that cush keeps the computer science juices flowing ...

Re:A question on this (1)

Solandri (704621) | about 3 months ago | (#47624599)

I am assuming that the answer is simply that they had to manually paint in the parts of the photos that were revealed when other parts were removed.

If you've used a recent version of Photoshop, its content-aware fill often does an amazing job at automatically filling in hidden backgrounds [youtube.com].

Re:A question on this (1)

roman_mir (125474) | about 3 months ago | (#47626117)

Removing objects from images and filling in the missing space with other content from the rest of the image, based on "awareness", has been available for some time now; it is called "content awareness [photoshopessentials.com]" in Photoshop and "Resynth [patdavid.net]" in Gimp.

Re:A question on this (1)

spacepimp (664856) | about 3 months ago | (#47623867)

I'm downloading the open-source software now to test it out. I assume it is very similar to the content-aware fill used in Photoshop.

Re:A question on this (0)

Anonymous Coward | about 3 months ago | (#47624105)

The longer video on their home website says they use data from the 3D model to fill in. They admit that it can look cheesy, but it goes mostly unnoticed in partial rotations thanks to the software's filling of textures.

Re:A question on this (0)

Anonymous Coward | about 3 months ago | (#47624301)

They use data from the 3D model to fill in the parts of the lifted object that are obscured in the original photo. That has nothing to do with filling in the background - i.e. the "hole" that would normally be left by moving the lifted object away from the space it originally occupied in the 2D photo.

Re:A question on this (0)

Anonymous Coward | about 3 months ago | (#47624667)

It is mentioned in the paper: they use an older algorithm called "PatchMatch".
http://gfx.cs.princeton.edu/pubs/Barnes_2009_PAR/index.php
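
For anyone who wants the gist without reading the paper: PatchMatch finds, for every patch in one image, an approximately nearest patch in another by random initialization, propagation of good offsets between neighbors, and random search at shrinking radii. A heavily simplified NumPy sketch of the idea (grayscale float images assumed; this illustrates the algorithm, it is not the authors' code):

    import numpy as np

    def ssd(a, b, ay, ax, by, bx, p):
        # Sum of squared differences between two p x p patches.
        d = a[ay:ay+p, ax:ax+p] - b[by:by+p, bx:bx+p]
        return float((d * d).sum())

    def patchmatch(a, b, p=7, iters=4, seed=0):
        rng = np.random.default_rng(seed)
        h, w = a.shape[0] - p + 1, a.shape[1] - p + 1  # patch positions in a
        H, W = b.shape[0] - p + 1, b.shape[1] - p + 1  # patch positions in b
        # Random initialization of the nearest-neighbor field (NNF).
        nnf = np.stack([rng.integers(0, H, (h, w)),
                        rng.integers(0, W, (h, w))], axis=-1)
        best = np.array([[ssd(a, b, y, x, *nnf[y, x], p)
                          for x in range(w)] for y in range(h)], float)
        for it in range(iters):
            # Alternate scan order so good offsets propagate both ways.
            step = -1 if it % 2 else 1
            coords = [(y, x) for y in range(h) for x in range(w)]
            if it % 2:
                coords.reverse()
            for y, x in coords:
                # Propagation: try the shifted offsets of preceding neighbors.
                for dy, dx in ((step, 0), (0, step)):
                    ny, nx = y - dy, x - dx
                    if 0 <= ny < h and 0 <= nx < w:
                        by, bx = nnf[ny, nx] + (dy, dx)
                        if 0 <= by < H and 0 <= bx < W:
                            c = ssd(a, b, y, x, by, bx, p)
                            if c < best[y, x]:
                                best[y, x], nnf[y, x] = c, (by, bx)
                # Random search around the current best at shrinking radii.
                r = max(H, W)
                while r >= 1:
                    by = int(np.clip(nnf[y, x][0] + rng.integers(-r, r + 1), 0, H - 1))
                    bx = int(np.clip(nnf[y, x][1] + rng.integers(-r, r + 1), 0, W - 1))
                    c = ssd(a, b, y, x, by, bx, p)
                    if c < best[y, x]:
                        best[y, x], nnf[y, x] = c, (by, bx)
                    r //= 2
        return nnf  # nnf[y, x] = top-left corner of best-matching patch in b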

Re:A question on this (1)

DigitAl56K (805623) | about 3 months ago | (#47624847)

Thanks, AC!

For anyone else interested, I found this video on PatchMatch:
http://www.youtube.com/watch?v... [youtube.com]

Re:A question on this (0)

Anonymous Coward | about 3 months ago | (#47627523)

Yeah, where the summary says "seamlessly reveals", I kinda mentally substituted "made up".

Is that wrong?

Not free as in freedom (4, Informative)

HyperQuantum (1032422) | about 3 months ago | (#47623531)

The software appears to be proprietary, not free as in 'free software'. It is available for zero cost, but usage is restricted:

ACADEMIC OR NON-PROFIT ORGANIZATION NONCOMMERCIAL RESEARCH USE ONLY

Re:Not free as in freedom (0)

Anonymous Coward | about 3 months ago | (#47624213)

Though the source code has supposedly been released under GPLv2, according to their website. Confusing.

Re:Not free as in freedom (1)

nawcom (941663) | about 3 months ago | (#47625457)

Though the source code has supposedly been released under GPLv2, according to their website. Confusing.

http://www.cs.cmu.edu/~om3d/co... [cmu.edu]

Re:Not free as in freedom (1)

azav (469988) | about 3 months ago | (#47625397)

And it crashes really, really easily.

Re:Not free as in freedom (1)

nawcom (941663) | about 3 months ago | (#47625445)

Yeah, not running on Yosemite for sure. Here's the GPL2 source code: http://www.cs.cmu.edu/~om3d/co... [cmu.edu]

Cool, but... detection? (1)

gurps_npc (621217) | about 3 months ago | (#47623575)

I was very impressed with the effects.

I would love to know how easy such manipulation is to detect. Is it harder or easier to detect than Photoshop edits?

At some point, Photoshop-type effects will become undetectable.

One trick used to make this seem more impressive (0)

Anonymous Coward | about 3 months ago | (#47623635)

The producers of the video used a trick to make their software a little more impressive than it really is.

The item which is to be converted to a 3D model covers part of the image; for instance, the chair covers part of the carpet, and the origami bird covers the tips of the fingers. However, in the edited version, those covered areas magically appear.
Clearly, this requires two carefully taken pictures: one with the object, one without. And of course you must be careful that the presence of the item is the ONLY difference between the two, or the effect will not be nearly as impressive.

This is a relatively minor complaint in the face of the impressive combination of technologies on display here, but still, the video would be significantly less impressive if they paused during the origami demo to say, "Now we have to carefully take another, identical picture. A tripod is required for the use of our software to maintain original camera position, and extreme care must be taken not to disturb the scene while removing the object that is to be manipulated."

Of course, there's a chance that they're drawing that hidden space in some other way, but that'd surprise me since they don't even mention it in the video. I guess they could use the same method they use with the 3D objects; interpolate from nearby texture or from a model's stock texture. But that wouldn't work, since the shape of the fingertips is hidden in the origami scene - no way to automatically generate that.

Re:One trick used to make this seem more impressiv (1)

Arkh89 (2870391) | about 3 months ago | (#47623711)

No, it seems they are using inpainting:

We compute a mask for the object pixels, and use this mask to inpaint the background using the PatchMatch algorithm [Barnes et al. 2009]. For complex backgrounds, the user may touch up the background image after inpainting.

Thus, only one image is required.
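
If you just want to try the effect yourself, OpenCV doesn't ship PatchMatch, but its classical inpainting gives the same flavor of filling a masked hole from its surroundings. A small sketch (the file names are hypothetical, and Telea's method is a stand-in for the paper's PatchMatch step):

    import cv2

    # img: the photo; mask: uint8, nonzero where the object's pixels were.
    img = cv2.imread("photo.jpg")
    mask = cv2.imread("object_mask.png", cv2.IMREAD_GRAYSCALE)

    # Fill the masked region from surrounding pixels; as the paper notes,
    # complex backgrounds may still need manual touch-up afterwards.
    background = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)
    cv2.imwrite("background.png", background)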

Re:One trick used to make this seem more impressiv (1)

spacepimp (664856) | about 3 months ago | (#47623925)

Why couldn't the algorithm be content-aware, similar to content-aware fill in Photoshop? That is much more likely given the scope of the software.

popular topic at SIGGRAPH for last decade (1)

peter303 (12292) | about 3 months ago | (#47623773)

The algorithms and software get better and better.
SIGGRAPH is next week in Vancouver.

looks fantastic (1)

Anonymous Coward | about 3 months ago | (#47623839)

Watching or attending SIGGRAPH is like watching an Ubisoft conference.

Everything looks amazing on stage, but when you get your hands on it, it's another story altogether.

I'll believe this works when I use it. Until then, I might as well go watch The Lawnmower Man and consider it a documentary.

What sorcery is this?! (1)

Alejux (2800513) | about 3 months ago | (#47624187)

I ask you!

Unbelievable (2)

cream wobbly (1102689) | about 3 months ago | (#47624473)

Wouldn't it be better if the results were believable?

Sigh (0)

Anonymous Coward | about 3 months ago | (#47624531)

This site used to be quite good.

Then YouTube put the banhammer on their trolls, and this place went to hell.

"to create unbelievable video results" (1)

Ecuador (740021) | about 3 months ago | (#47625055)

I thought the point was to create believable video results. Bad, fake-looking 3D out of 2D sources has been done to death, mostly for the cinema...

Not as impressive as the video makes it look (1)

Vyse of Arcadia (1220278) | about 3 months ago | (#47625327)

I've done a little bit of work in a related area, so I skimmed the paper (at the bottom of the first link), and it's nowhere near as impressive and automagical as the video makes it seem. The user has to provide a mask distinguishing the object they are manipulating from the rest of the image, and then the user also has to provide the 3D model for the object! The model is then smoothed to better fit the original using the mask and the inferred illumination, textured using the image, and then popped out to be manipulated in 3D. Not to detract from how cool this all is, but the user is still doing a lot of the heavy lifting.

I bet a combination of the techniques in this paper and the techniques of multiple view geometry (which is where I've actually done a bit of work) would be considerably more impressive and automagical.
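
As a rough illustration of just the texturing step described above: once the user has supplied the model and the pose, sampling per-vertex colors from the photo reduces to a pinhole-camera projection plus a lookup. A toy NumPy sketch (this ignores the paper's illumination estimation and model smoothing, and all names are illustrative):

    import numpy as np

    def project_texture(vertices, K, R, t, image):
        """Sample per-vertex colors from a photo with a pinhole camera.
        vertices: (N, 3) world-space points; K: 3x3 intrinsics;
        R, t: world-to-camera rotation/translation; image: (H, W, 3)."""
        cam = (R @ vertices.T).T + t        # world -> camera coordinates
        uvw = (K @ cam.T).T                 # camera -> homogeneous pixels
        uv = uvw[:, :2] / uvw[:, 2:3]       # perspective divide
        u = np.clip(uv[:, 0].round().astype(int), 0, image.shape[1] - 1)
        v = np.clip(uv[:, 1].round().astype(int), 0, image.shape[0] - 1)
        return image[v, u]                  # (N, 3) sampled vertex colors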

Well now. (1)

azav (469988) | about 3 months ago | (#47625387)

That's pretty crashy software. At least it builds. You'd think they would distribute something a little more solid.

The future... (2)

Grim Beefer (946632) | about 3 months ago | (#47625619)

This looks pretty cool, but I have a lot of questions.

On its surface, a lot of the results they're getting wouldn't currently be outside the realm of student-level work, such as the simple practice of projecting and baking textures into materials from photographs; the innovation seems to be that they're automating a lot of that into a UI with a fast lighting solution. One of the things I find most rewarding about 3D is that you sometimes get a huge burst of increased productivity, as long as you're not too bummed out about things you've spent time and energy learning becoming obsolete. This isn't fundamentally different from setting your viewport background in Maya, 3ds Max, etc. to a photograph after properly matting your foreground objects and projecting textures with adjusted reflectivity, just without all of the manual tediousness. There's also been other, similar work on the subject [vimeo.com] that I've heard of, but this still looks pretty neat if it's something you can use right now without a billion-dollar computer.

One of the big things this tech might be doing is streamlining the process of match lighting. I personally can't wait until the major software packages have integrated solutions for easy lighting from photo sources. Currently the setup for photo matting is a pain: it requires stitching together panoramic photos of reflective chrome spheres on location, or carefully using observation skills to recreate the lighting by hand (which can be very difficult for glossy surfaces). It would appear, however, that we're on the brink of not needing those things anymore. That being said, this software still has a bit to go.

For example, the lighting information baked into the diffuse textures of the objects in these examples does not appear to be dynamic: if you watch the taxi-spinning segment, you'll notice that the specular highlights on the hood of the car do not properly update as the orientation of the model changes relative to the light sources, making the taxi appear to have white paint streaks once rotated out of alignment with the light source. The car falling off the cliff is probably the example where this is most apparent in the final results, as the strong baked lighting makes the coloring look off.

The way we 3D artists get around this problem is to eliminate the lighting information in our diffuse textures as much as possible before reapplying them as flat color, and then let our lighting rigs take care of the reflections, shadows, and so on. As they mention, this software doesn't support transparency, and I would guess it renders everything as matte objects, meaning the renderer probably isn't robust enough to handle anything close to complicated reflections/refractions, making this software's usefulness very situational for now. It would be a great way to quickly populate photos with hordes of smaller objects, for example. However, with a more feature-rich renderer, this tech could be really useful for the Photoshop crowd. I wish Autodesk/Mental Ray would focus on stuff like this instead of the boring crap updates we usually get (Maya's new fluids are pretty cool though, tbh...).
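
The "flatten the diffuse texture" step described above can be crudely approximated in a few lines: treat heavily blurred luminance as the baked-in shading and divide it out. Real delighting workflows are far more careful; this is only a sketch, and the file names in the usage comment are hypothetical:

    import cv2
    import numpy as np

    def naive_delight(texture_bgr):
        # Treat low-frequency luminance as baked-in shading and divide it
        # out, leaving something closer to flat albedo for relighting.
        img = texture_bgr.astype(np.float32) / 255.0
        lum = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        shading = cv2.GaussianBlur(lum, (0, 0), 25) + 1e-4
        albedo = img / shading[..., None]
        # Rescale so overall brightness is roughly preserved.
        return np.clip(albedo * float(shading.mean()), 0.0, 1.0)

    # Hypothetical usage:
    # flat = naive_delight(cv2.imread("taxi_diffuse.png"))
    # cv2.imwrite("taxi_albedo.png", (flat * 255).astype(np.uint8))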