
Microsoft Tech Can Deblur Images Automatically

timothy posted more than 4 years ago | from the pleasantly-awesome dept.


An anonymous reader writes "At the annual SIGGRAPH show, Microsoft Research showed new technology that can remove the blur from images on your camera or phone using on-board sensors — the same sensors currently added to the iPhone 4. No more blurry low light photos!"


Enhance (5, Funny)

TheSwampDweller (1076321) | more than 4 years ago | (#33097782)

Enhance!

Re:Enhance (5, Funny)

bl4nk (607569) | more than 4 years ago | (#33097800)

You forgot to include the link to the YouTube video: http://www.youtube.com/watch?v=Vxq9yj2pVWk [youtube.com]

Re:Enhance (5, Funny)

g2devi (898503) | more than 4 years ago | (#33098084)

Actually, that's an Alpha version. The production version is demonstrated here:

http://www.youtube.com/watch?v=KUFkb0d1kbU [youtube.com]

Re:Enhance (2, Funny)

Mirey (1324435) | more than 4 years ago | (#33097818)

Just print the god damn picture! (from Super Troopers)

Ugh (1)

GrumblyStuff (870046) | more than 4 years ago | (#33097916)

That's the first thing that came to mind. Luckily, it's a hardware attachment so we can still tell people to fuck off when they come to us with blurry photos.

Unless they have the attachment.

Re:Enhance (1)

CharlyFoxtrot (1607527) | more than 4 years ago | (#33098284)

Enhance!

There's an app for that!

lol yea sure (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#33097786)

Probably only half-working, coming from Microsoft; plus if you use a black light in the room you can brick your phone/camera.

Re:lol yea sure (5, Informative)

Helios1182 (629010) | more than 4 years ago | (#33097814)

Microsoft Research puts out a lot of really interesting and successful research. They aren't the people programming the OS or office applications.

Re:lol yea sure (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#33098040)

It's true. These demos haven't been through the marketing, bloating, or bug-insertion departments yet.

Even so... (4, Interesting)

fyngyrz (762201) | more than 4 years ago | (#33098572)

Clearly (pun intended) the results have a ways to go yet. Look at the Coca-Cola image, at the 'a' on the end of "Cola"... that thing is hosed by the blur, and they're unable to recover it because there's no intermediate contrasting color. Same thing for the spokes on the car rims.

This problem can't be completely solved post-picture. Only large-scale elements with nothing else around them will yield pixel-sharp solutions.

The optimum way to correct blur is to apply active or passive (e.g. tripod) stabilization to the lens prior to the shot; active technology is already pretty decent. (Photographers tend to measure things in stops; it's intuitive to them. When they say an active stabilizer "gives you" four stops, for instance with Canon, they mean you can shoot with a shutter four stops slower and you won't get blur from camera movement.) It doesn't solve subject movement at all, but then, nothing really does other than cranking down the exposure time.

So... considering lens stabilization has been in-camera for years, and this requires more hardware, but gives you less... I'm going to go out on a limb and say it isn't of interest to camera folks. Maybe in some esoteric role... a spacecraft or something else with a tight power budget where stabilization can't be done for some reason (certainly measurement takes less power than actual stabilization)... but DSLRs and point-and-shoots... no.
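For readers unfamiliar with the stop arithmetic above: each stop is a factor of two in light, so a four-stop stabilizer lets you hold the shutter open 2^4 = 16 times longer. A quick Python sketch (the numbers are illustrative rules of thumb, not from the article):

    # Each "stop" is a factor of two in light. A stabilizer rated for n stops
    # lets you lengthen the shutter time by 2**n before handshake blur shows up.

    def slowest_safe_shutter(base_seconds, stops):
        """base_seconds: slowest safe handheld shutter time without stabilization."""
        return base_seconds * (2 ** stops)

    # Common rule of thumb (an assumption, not from the post): about
    # 1/focal-length seconds handheld, so 1/200 s on a 200 mm lens.
    base = 1.0 / 200
    print("no stabilizer:      %.4f s" % base)                           # 0.0050 s
    print("4-stop stabilizer:  %.4f s" % slowest_safe_shutter(base, 4))  # 0.0800 s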

Re:lol yea sure (-1, Troll)

Threni (635302) | more than 4 years ago | (#33098302)

> They aren't the people programming the OS or office applications.

No, those guys are busy running mac and linux software looking for stuff to rip off... I mean, sitting around coming up with amazing innovations.

Re:lol yea sure (1)

RobertM1968 (951074) | more than 4 years ago | (#33098806)

Microsoft Research puts out a lot of really interesting and successful research. They aren't the people programming the OS or office applications.

Yes, but I just took two of the images, for research purposes, and applied a simple sharpen filter to them (at two different levels), and the results seem pretty comparable. If I spent more than two mouse clicks trying to sharpen them properly, I bet the results would be even better, and still not require additional hardware. As a matter of fact, the results they get can easily be duplicated with IN CAMERA filters, saving a boatload of development costs.

These (SHARP 1 and SHARP 2) were done in PMView from the blurry image: Filters -> Sharpen (mild) and Sharpen (moderate). Adding "Edge Enhance" makes the car one look even better. Now, these are very, very basic filters that were surpassed ages ago by filters easily runnable on a camera or cell phone.

So, I find nothing interesting or successful about this. I find this will be something that makes cameras and cell phones cost more money, while not providing any benefits that a simple filter or two in the cam/phone can accomplish.

deBlur and Sharpen [robertmauro.com]

Sorry about the scrolling... it's just something quick I threw together in two seconds which does the job. MS, if you want the research comparison taken down, email me at first name dot last name at google dot com.

Re:lol yea sure (4, Insightful)

nine-times (778537) | more than 4 years ago | (#33098810)

Yeah, Microsoft does some decent research and develops some interesting technologies. It's turning things into products that they seem to have trouble with.

Re:lol yea sure (5, Funny)

nacturation (646836) | more than 4 years ago | (#33097846)

Probably only half-working, coming from Microsoft

It could be worse... the GIMP developers could have built it, in which case it would be a mostly working implementation of half the features of some existing software. However, nobody would realize this since only the developers would be able to comprehend the UI.

Re:lol yea sure (3, Insightful)

binarylarry (1338699) | more than 4 years ago | (#33097906)

Have you used GIMP in the past 5 years?

Re:lol yea sure (2, Informative)

bill_mcgonigle (4333) | more than 4 years ago | (#33098000)

Single-window mode hasn't been released yet, but it's coming. This will make it usable for folks who aren't using fvwm with focus-follows-cursor.
 

Re:lol yea sure (0, Troll)

h4rr4r (612664) | more than 4 years ago | (#33098072)

Those people should use a better setup. The lack of focus-follows-mouse by default really makes the Windows desktop suck, as does the lack of workspaces.

Re:lol yea sure (2, Insightful)

bill_mcgonigle (4333) | more than 4 years ago | (#33098128)

Those people should use a better setup.

Surprisingly enough, different people have different needs.

The lack of focus-follows-mouse by default really makes the Windows desktop suck, as does the lack of workspaces.

I'm too much of a spaz to use focus-follows-mouse. Every time I try it I wind up bumping the mouse and typing into the wrong window. If I were a hardcore pre-trunk GIMP user I'd definitely have a session set up that way, though. Fortunately, the GIMP developers have come around to an option that works with most people's desktops.

Re:lol yea sure (0)

Anonymous Coward | more than 4 years ago | (#33098584)

I'm not exactly sure what GP is using focus-follows-mouse for, but a window manager can sometimes do focus-follows-mouse-for-mouse and focus-follows-mouse-for-keyboard (those aren't standard names). I use the former to scroll one window while I type into another.

Re:lol yea sure (1)

zippthorne (748122) | more than 4 years ago | (#33098648)

That's because that's not how "focus follows mouse" should work. You don't move *all* the UI events over to whatever the mouse is over. You move the *mouse* UI events over there, and maybe some others, depending on context. Certainly not typing, though.

The biggest, most useful one, IMO, is scroll. I can't tell you how many times I've wanted to scroll a window to view some stuff, but not have that window cover up the one for the app I'm actually working in.

It's doubly frustrating, because when I get home, my mac does exactly what I expect it to...

Re:lol yea sure (1)

EvanED (569694) | more than 4 years ago | (#33098310)

The lack of focus-follows-mouse by default really makes the Windows desktop suck, as does the lack of workspaces.

Says you. When I'm working in a text editor or whatever, I like to move the mouse entirely out of that window's area so the cursor isn't distracting.

Funnily enough, different people have different preferences. Who'd have thunk it?

Re:lol yea sure (1)

LinuxIsGarbage (1658307) | more than 4 years ago | (#33098796)

Shouldn't you be using a full screen emacs or vim text editor anyways?

Re:lol yea sure (1)

dbIII (701233) | more than 4 years ago | (#33098762)

MS Windows has supported multiple monitors for ages, so the old GIMP interface should be starting to make sense to people by now, even if you don't have multiple virtual workspaces.

Re:lol yea sure (1)

Threni (635302) | more than 4 years ago | (#33098320)

I have - it's shit. When's that new version coming out? Perhaps when it makes sense to anyone other than the developers it'll be allowed back into the standard Ubuntu distribution?

Bitching about gimp (4, Insightful)

fyngyrz (762201) | more than 4 years ago | (#33098630)

You know, you -- and 99% of the others bitching about the Gimp -- are utterly full of shit. I write commercial image processing / editing / animation / generation software for a living. I'm an expert - you can read that as "terrifyingly expert" - with Photoshop, Gimp and a whole raft of others... and Gimp is an easy-to-use powerhouse.

Now I will grant you exactly ONE thing, and that is, you need to sit down and learn to use it. That should take a few hours if you're familiar with something (anything) else; maybe a week hunting down tutorials, or a day hanging with a qualified mentor, if editing bitmaps is all new to you.

If it takes you longer than that, you're either stupid or lazy.

There's *nothing* significantly wrong with the Gimp. It has its limits, like everything does (Photoshop has some really annoying limits too), but for the vast majority of image processing and touch-up needs, it's very nice.

Oh, mommie, my crop function is in a different menu... Some people just need a good smack in the head.

If you really knew what you were doing, you'd have, and use, a whole suite of these programs, because for the big ones, there are areas where they excel, and that's the time to put them into play. If you can't learn to use them because the keystrokes are different, or there is a different paradigm... it isn't the program that sucks. It's you.

Also, if you actually knew how to use them, you wouldn't be bitching about them.

Re:lol yea sure (2, Insightful)

JWSmythe (446288) | more than 4 years ago | (#33098502)

    I only run into the occasional problem with GIMP. They really have come a long way.

    I switched from Photoshop to GIMP years ago. Photoshop kept crashing on my machine, and GIMP didn't. Then I found there were more things I could do with GIMP, so I stayed. Once in a while I try out Photoshop again, but I stay with GIMP. A few times, Photoshop folks have run into problems, so I tell them to just send me their file, and I fix it in GIMP and send it back. :)

    But hey, it's a holy war. Sides have been drawn, and there are zealots on both sides who trash talk each other. Don't ever try to convince someone that the other is better, because it'll just be an argument. I don't play holy wars. I try both sides, and use what works best.

    On the computer I'm using right now, it's a dual-boot Windows 7 and Slackware64 machine. Windows 7 crashed yet again, with the only solution being "format and reinstall". Bah, I just did that a month before. Instead, I'm staying booted up in Slack64, and am very happy. My other copy of Windows is sitting in a VirtualBox window, which I only bring up for the odd occasions that I need to run a Windows-only app. Will I convince a Windows user to switch to Linux? Probably not. Am I perfectly content? Yes.

Re:lol yea sure (0)

Anonymous Coward | more than 4 years ago | (#33098030)

Ding, ding ding... we have a winner! This description distills down the Gimp and its myriad issues into two sentences. Good job!

Re:lol yea sure (2, Funny)

Culture20 (968837) | more than 4 years ago | (#33098282)

It could be worse... the GIMP developers could have built it, in which case it would be a mostly working implementation of half the features of some existing software. However, nobody would realize this since only the developers would be able to comprehend the UI.

If you don't like the GUI, there's always the Lisp interface. If another OSS project gets named after a disability, I'm sure the gimp devs will incorporate it somehow.

Re:lol yea sure (0)

Anonymous Coward | more than 4 years ago | (#33097946)

Are you unaware of what an image-processing algorithm is? Unless its usage defies laws of physics it won't be bricking anything. Adaptations in proprietary implementations will more than likely appear first in the next phone generation or later models if app developers do not beat them to it by hacking (probably poor or minimal) support onto current phones.

Windows 7 (2, Funny)

nacturation (646836) | more than 4 years ago | (#33097822)

I bet it can remove the blur from the titlebar for screenshots of a Windows 7 app. Now we can all see what those developers are viewing behind that window!

Useful, but limited (4, Informative)

AliasMarlowe (1042386) | more than 4 years ago | (#33098262)

It won't help at all if the object is moving. In fact, this feature should be switched off if you're trying to photograph a moving object by tracking it with the camera (common enough, and not just in sports). It would not be able to compensate for a mismatch between the object's speed and your tracking movement, and would do entirely the wrong thing even if you tracked the moving object perfectly for the shot. In this case, there is no substitute for adequate light and/or a fast lens and/or a smooth, accurate tracking movement.

As another comment, deconvolution requires a very accurate approximation of the true convolution kernel, which may be provided by the motion sensors. However, to reconstruct the image without artifacts, the true kernel must not approach zero in the Fourier domain below the Nyquist frequency of the intended reconstruction (which is limited by the antialias filter in front of the Bayer mask). In fact, if the kernel's Fourier transform has too small a magnitude at some frequency, the reconstruction at that frequency will be essentially noise, or will be zero if adequate regularization is used. If the motion blur is more than a few pixels, this will generally mean that the reconstructed image will have an abridged spectrum in the direction of blur, compared to directions in which no blur occurred. Of course, if your hand is so shaky and the exposure so long that blur occurs in all directions, then the spectrum of the reconstructed image will be more uniform. It is likely to be truncated compared to the spectrum of an image taken without motion blur.
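As a rough sketch of the deconvolution argument above (a minimal 1-D toy in Python, assuming a known blur kernel and simple Wiener-style regularization; this is my illustration, not the paper's algorithm):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 256
    scene = np.zeros(n)
    scene[100:140] = 1.0                       # a bright bar with two sharp edges

    kernel = np.zeros(n)
    kernel[:8] = 1.0 / 8                       # 8-pixel linear motion blur

    H = np.fft.fft(kernel)
    blurred = np.real(np.fft.ifft(np.fft.fft(scene) * H))
    blurred += rng.normal(scale=1e-3, size=n)  # sensor noise

    # Wiener-style regularized inverse: behaves like 1/H where |H| is large,
    # and rolls off toward zero where |H| is small instead of amplifying noise.
    eps = 1e-2
    restored = np.real(np.fft.ifft(
        np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps)))

    # Frequencies where |H| ~ 0 come back as ~0, not as scene content:
    # that is the "abridged spectrum in the direction of blur" described above.
    print("min |H|:", np.abs(H).min())
    print("relative error:", np.linalg.norm(restored - scene) / np.linalg.norm(scene))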

The quality of the reconstructed image would also be limited by the effects of other convolutions in the optical pathway. For instance, if you're using a cheap superzoom lens, don't expect to get anywhere near the antialias filter's Nyquist frequency in the final image, as the lens will have buggered up the details nonlinearly across the image even before the motion blur is added. If you're using nice lenses (Canon "L" series or Pentax "*" series and suchlike), then this will not be an issue.

The method would seem to be useful in low-ish light photography of stationary objects. A sober photographer would beat a drunk photographer at this, but the technique would help both to some extent. A photographer using a tripod would do best, of course.

Re:Useful, but limited (1)

nacturation (646836) | more than 4 years ago | (#33098300)

And how often do you have an object moving beneath a Windows 7 titlebar?

Perhaps you should have started your own thread.

I think you oughta look at the examples. (3, Insightful)

Estanislao Martínez (203477) | more than 4 years ago | (#33098662)

There are some full-size samples of the results of the technique [microsoft.com], where you can compare the original image with the result of their technique, and the results of two older techniques. Their technique shows some very obvious problems:

  1. Doubling of high-contrast edges that are "ghosted" in the original because of the motion blur. In the original, presumably, the motion was something like this: start at position A, hold for a relatively large fraction of the exposure, then quickly move to position B, and hold for another large fraction of the exposure. This means that the photo records two copies of any high-contrast edge, one corresponding to A and the other to B.

    There are several examples in the link that seem to be like that. The technique doesn't seem to figure this out in all cases, and renders the two ghost lines as separate, sharp lines. Most obvious example: the edge of the front rim of the red car in the second photo. Though compare with the result they got in the photo of the Coca-Cola cans, where it did figure it out for the rack, but not for the text on the cans, and where it introduced some artifact lines perpendicular to the rack.

  2. Severe white sharpening halos around edges.

The more instructive comparison is between these guys' results and those of the older techniques. Clearly, they're doing a lot better than the older techniques. Still, this is very far from primetime, IMO.
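The two-position motion hypothesized in point 1 corresponds to a blur kernel that is roughly two impulses; a toy Python sketch (my construction, purely illustrative, not taken from the samples):

    import numpy as np

    n = 128
    edge = np.zeros(n)
    edge[64:] = 1.0                 # one sharp, high-contrast edge

    # Dwell at position A for half the exposure, jump 10 px, dwell at B:
    kernel = np.zeros(n)
    kernel[0] = 0.5
    kernel[10] = 0.5

    ghosted = np.real(np.fft.ifft(np.fft.fft(edge) * np.fft.fft(kernel)))

    # The single edge is now two half-height steps 10 px apart; a deblurring
    # pass that misestimates the kernel can render them as two sharp edges.
    print(np.round(ghosted[58:80], 2))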

Re:Useful, but limited (0)

Anonymous Coward | more than 4 years ago | (#33098824)

Holy shit, someone actually said something smart and correct on this site.

+1 internets to you sir or madame.

Re:Windows 7 (0)

Anonymous Coward | more than 4 years ago | (#33098686)

I guess any Microsoft bash will do around here... even if it sounds like the ramblings of a 12-year-old child.

Interestingly simple concept (3, Insightful)

Manip (656104) | more than 4 years ago | (#33097836)

This is like one of those "Why didn't I think of that?" ideas that you wonder why your camera doesn't already have. The nice part is that it can be done very cheaply (relative to the cost of a camera) and would improve images in many cases. My only tiny little concern is that you might introduce artifacts into your photos - which makes me wonder if it wouldn't be better to store a raw image and the data from these sensors independently? I wonder if there is a scenario where you might be moving but the object you're taking a picture of is stationary relative to your movement. For example, you're standing on a boat rocking in the waves, you take a photo of the deck, and this technology compensates for the rocking, which results in a ton of blur.

Re:Interestingly simple concept (1)

Hast (24833) | more than 4 years ago | (#33097974)

This method (like all motion-compensating algorithms) can correct for some motion blur, but it will add other defects to the image. So while it might "save" a picture already captured, it's better to take a new photo.

Re:Interestingly simple concept (1)

Carewolf (581105) | more than 4 years ago | (#33098026)

The concept is in fact so simple it has already been done. This is probably just a new, enhanced(!) algorithm. I have a digital camera several years old which can compensate for moving or shaking the camera; this basic feature is just off by default on my camera, though some more idiot-proof cameras have it on by default.

Re:Interestingly simple concept (2, Informative)

Threni (635302) | more than 4 years ago | (#33098258)

This isn't IS like Canon (for example) has on its lenses. This is making a note of the movement and removing it later (where later could mean just after the pic is taken) rather than using gyros or whatever to prevent the shaking from affecting the picture in the first place. Perhaps both systems could be used, but I'm not sure it makes sense to use a record of how the camera moved when some of that movement has already been compensated for - you might end up adding it back in again.

Re:Interestingly simple concept (1)

JWSmythe (446288) | more than 4 years ago | (#33098536)

    I usually like that feature. I had to turn it off on my video camera when I was doing a shoot a couple weeks ago. My hands aren't always steady, so it's nice having that fixed automagically. But I had set up for a tripod shot (filming a stage), and it treated the motion on the stage as the part to hold still, so the compensation made it look like I was unsteady. With the steady shot turned off, it came out perfectly. Well, until someone bumped my tripod, but there isn't much we can do about that other than beat down idiots who bump into your equipment.

    I swear, tripods must have a neon sign for dumb people that says "come trip on me!" I don't see it, but regardless of how well you guard your stuff, someone will take the first opportunity to trip on it.

Re:Interestingly simple concept (1)

Firehed (942385) | more than 4 years ago | (#33098028)

I imagine that if this tech does make it into higher-end cameras (namely SLRs), the accelerometer data will be saved as extra data in the RAW file. Due to the nature of RAW files, I think it would have to be done that way. Naturally, if you're shooting JPEGs (phones, P&S, foolish SLR users), then you just take what you get and that's it. It will probably just become another part of RAW "development" for higher-end shooters.

Ultimately, the concept isn't very different from the image stabilization we already have. It's just cheaper to implement and probably somewhat less effective. Instead of moving the lens elements or sensor to counteract shaky hands and record a clean image in the first place, this just records how you moved and runs some smart algorithms to try to undo the damage you caused.

Android app in 3..2..1 (0)

Anonymous Coward | more than 4 years ago | (#33098062)

..and shame on you, camera manufacturers, for not thinking about this already!

non-inertial frame (2, Insightful)

martyb (196687) | more than 4 years ago | (#33098088)

My only tiny little concern is that you might introduce artifacts into your photos - which makes me wonder if it wouldn't be better to store a raw image and the data from these sensors independently? I wonder if there is a scenario where you might be moving but the object you're taking a picture of is stationary relative to your movement.

I suspect in the majority of cases, this would improve photos. As to your query, my first thought of a problematic environment would be trying to take a photo of a friend sitting next to you -- in a moving roller coaster as it hurtles around a bend. You and your friend are [mostly] stationary WRT each other, but you (and the camera) are all undergoing acceleration, which the camera dutifully attempts to remove from the photo. Certainly a comparatively rare event compared to the majority of photo ops.

But let's make sure we understand what it does... (1)

Estanislao Martínez (203477) | more than 4 years ago | (#33098196)

This is like one of those "Why didn't I think of that?" ideas that you wonder why your camera doesn't already have.

Well, many cameras have optical vibration reduction, either on the lenses or using a sensor-shift mechanism. This mechanism, to the extent that it works, should work better than the software solution being described in the article.

It's important to understand that random camera motion blur in almost all cases leads to information loss. Because of the motion, rays of light that would have hit only one pixel if the camera had been steady end up hitting more than one pixel--whereas moving the lens elements or sensor tends to keep the same pixel aligned with the same point of the photographic subject.

My guess is that recording the motion of the camera and doing the post-processing described in the article will reintroduce some acutance [wikipedia.org] to the image (high-contrast edges will be sharper), but that there will still be a significant loss of resolution [wikipedia.org] (the finest detail that can be recorded). So, for example, the edge of a person's face will be reasonably sharp, but there won't be a lot of detail on the hair or skin.

Re:But let's make sure we understand what it does. (1)

Beardydog (716221) | more than 4 years ago | (#33098840)

What about combining the accelerometer data with a setting that records low-light images as a series of high-speed, underexposed frames, then just using the accelerometer data to merge them?

Re:Interestingly simple concept (0)

Anonymous Coward | more than 4 years ago | (#33098198)

I haven't read the paper, but from the abstract it looks like the inertial sensors only "assist" in finding a parameter set which minimizes an "energy" describing the amount of blur. It is quite simple to see if the result of a deconvolution is sharper than the input, so in cases where the picture is of an object which moves with the camera, the image will most likely remain untouched.

This device+algorithm might improve pictures of static scenes, but it can't correct pictures of moving objects. I don't quite see what the principal difference between this and electronic anti-shake algorithms is. I always thought those already did deconvolution based on the output of accelerometers. Another option would be to take several pictures with short exposure times, shift them to remove full-frame motion, and add them. If the camera can handle that, I'd imagine the result would be better than deconvolution, because deconvolution doesn't handle blown-out highlights well.
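The shift-and-add alternative mentioned above is easy to sketch in Python, assuming a global per-frame translation known from (hypothetical) inertial data:

    import numpy as np

    rng = np.random.default_rng(1)
    truth = rng.random((32, 32))                    # the static scene

    # Simulate 4 short, noisy exposures, each globally shifted by (dy, dx) px.
    shifts = [(0, 0), (1, 3), (2, -1), (-1, 2)]
    frames = [np.roll(truth, s, axis=(0, 1)) + rng.normal(scale=0.1, size=truth.shape)
              for s in shifts]

    # Undo each frame's shift (here taken from the hypothetical inertial data)
    # and average: noise drops by ~sqrt(k) with k frames.
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1)) for f, (dy, dx) in zip(frames, shifts)]
    stacked = np.mean(aligned, axis=0)

    print("single-frame error:", np.abs(frames[0] - truth).mean())
    print("stacked error:     ", np.abs(stacked - truth).mean())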

Thanks to Steve Jobs? (0)

Anonymous Coward | more than 4 years ago | (#33097838)

Maybe I failed to catch it from the article:
At the annual SIGGRAPH show, Microsoft Research showed new technology that can remove the blur from images on your camera or phone using on-board sensors -- the same sensors currently added to the iPhone 4. But WTF? Steve please!!! It's not always about you.

Put it to good use (1)

thetoadwarrior (1268702) | more than 4 years ago | (#33097842)

There is a lot of poor porn out there from people who can't hold a camera still. Microsoft should redeem itself and sort that out ASAP.

CSI Miami (1)

PimpDawg (852099) | more than 4 years ago | (#33097844)

Finally, we'll have to quit making fun of the redhead cop every time he asks to zoom into a blurry license plate.

Re:CSI Miami (2, Insightful)

Dragoniz3r (992309) | more than 4 years ago | (#33098328)

Sorry, no. The blur in CSI Miami is not caused by motion, thus motion compensation won't help. That blur is just a sheer lack of pixels, and this algorithm does nothing to help that situation. CSI-mocking is safe.

Re:CSI Miami (1)

JWSmythe (446288) | more than 4 years ago | (#33098564)

You gotta love how they can take a single pixel and come out with whatever they need. "If we [tap][tap][tap] zoom in on the reflection in the eye of the victim in the photo, we'll notice [tap][tap][tap] there is a mirror. In the reflection in the mirror is [tap][tap][tap] Oh, it's a clear face which [tap][tap][tap] matches the DMV database in Austria for [tap][tap][tap] this bad guy!" Not bad for a shot accidentally taken from a camera phone as the victim was being murdered.

Frankencamera. (5, Interesting)

Greger47 (516305) | more than 4 years ago | (#33097858)

Step back! This is a job for Frankencamera [stanford.edu]. Run it on your Nokia N900 [nokia.com] today.

OTOH, having that Arduino board and a mess of wires attached to your camera does score you a lot more geek cred than photographing with a plain old mobile phone.

/greger

Re:Frankencamera. (0)

Anonymous Coward | more than 4 years ago | (#33097900)

OTOH, having that Arduino board and a mess of wires attached to your camera does score you a lot more geek cred than photographing with a plain old mobile phone.

Or on the other other hand you could just learn how to use your camera and not get such a shit load of blur in the first place.

Re:Frankencamera. (1)

EvanED (569694) | more than 4 years ago | (#33097964)

Or on the other other hand you could just learn how to use your camera and not get such a shit load of blur in the first place.

That's what I always tell people. The raw light they spontaneously emit from knowing what they're doing will go and light up that dim scene, thus letting them use a faster shutter. It's really quite simple.

Re:Frankencamera. (1)

OzPeter (195038) | more than 4 years ago | (#33098192)

Or on the other other hand you could just learn how to use your camera and not get such a shit load of blur in the first place.

That's what I always tell people. The raw light they spontaneously emit from knowing what they're doing will go and light up that dim scene, thus letting them use a faster shutter. It's really quite simple.

I thought this only worked when taking pictures over your shoulder and your buttocks were pointed at the subject?

Arduino board and a mess of wires... (2, Insightful)

offrdbandit (1331649) | more than 4 years ago | (#33097928)

Sounds like a great way to land a spot on a terrorist watch list, to me...

Re:Frankencamera. (1)

gmuslera (3436) | more than 4 years ago | (#33098620)

It's perfect for it. You already have most of the needed hardware included, and you can install any software you need to play with the photo or the process of taking it. But the words "Microsoft Research" sound a bit ominous in the article. Probably the research has a big fat patent behind it that says somewhere "and it is forbidden to try this on open source operating systems".

Re:Frankencamera. (4, Informative)

slashqwerty (1099091) | more than 4 years ago | (#33098736)

It's worth noting that page nine of the Frankencamera team's paper [stanford.edu] mentions the work of Joshi et al. when it discusses deblurring pictures. Neel Joshi [microsoft.com] was the lead researcher behind the article we are discussing.

"No more blurry low light photos!" (3, Informative)

Average_Joe_Sixpack (534373) | more than 4 years ago | (#33097872)

Social networking sites are about to get a whole lot more ugly

Re:"No more blurry low light photos!" (0)

Anonymous Coward | more than 4 years ago | (#33098296)

You're forgetting the third law of socialdynamics: image definition is inversely proportional to the standard of beauty.

(I've just violated the first two laws by telling you that...)

comparison to other methods? (3, Interesting)

supernova87a (532540) | more than 4 years ago | (#33097886)

I recall that some other cameras, like a Casio I've seen a friend using, also do deblurring, but rather by stacking rapid subframes (I guess using bright reference points). If I understand correctly, this new method operates on a single frame. I wonder if anyone has a useful comparison of the hardware requirements/image quality/usability differences between the two methods?

Re:comparison to other methods? (1)

Threni (635302) | more than 4 years ago | (#33098276)

> I wonder if anyone has a useful comparison of the hardware requirements/image quality/usability differences between the two methods?

If you have a proper camera, taking a 12+ megapixel image at a high shutter speed, and perhaps also a high ISO, and flash, then there'll be no time for multiple pictures to reduce blur.

Re:comparison to other methods? (1)

zippthorne (748122) | more than 4 years ago | (#33098678)

What if you just take a quarter of those pixels, four times, and then march them out, as in the high-speed-photography trick mentioned not too long ago that was later incorporated into CHDK?

Re:comparison to other methods? (1, Insightful)

Anonymous Coward | more than 4 years ago | (#33098344)

Not surprisingly, the title is somewhat inaccurate. Blur can be caused by several things. One of them is movement of the camera while the "shutter" is open. That is the one MS has a solution for. I probably would have called it digital image stabilization or something.

So an alternative is optical (more accurately, mechanical) image stabilization.

Pretty neat, but it won't remove every kind of blur, unfortunately...

Okay. (2, Insightful)

kurokame (1764228) | more than 4 years ago | (#33097890)

Great, you can improve your motion-blur-removal algorithm by recording the motion that created the blur.

Although technically, the blur in the image itself already recorded the motion, with better precision and without calibration issues. So this is more of a supplement. The before-and-after images leave out the whole "you can already do this without the extra sensor data" aspect.

And really, you'll get far better results if you just use an adequately short exposure time and some mechanical stabilization. Brace your shooting arm. If you want to get fancy, use something like Canon IS lenses.

Yeah, this is nifty, especially for smartphone-based cameras which may already have built-in sensors to do this. But it isn't exactly revolutionary, either. You'll get better photos out of learning some basic photography than you will out of fancy sensors and analysis software.

Re:Okay. (1)

mark-t (151149) | more than 4 years ago | (#33097926)

I think that the idea is that this would be intended for everyday point-and-shoot cameras that are usually hand-held.

Re:Okay. (4, Insightful)

profplump (309017) | more than 4 years ago | (#33097968)

This isn't for people who want to learn photography and take good pictures; it's for people who are shooting their friends in a bar at night to post on Your Face in a Tube and laugh about for a week before it's forgotten -- it's merely intended to let point-and-click shooting work more reliably in poor conditions on cheap equipment with inattentive and untrained operators.

Re:Okay. (2, Interesting)

sker (467551) | more than 4 years ago | (#33098008)

Agreed. I feel the same way about auto-focus.

Re:Okay. (1)

timeOday (582209) | more than 4 years ago | (#33098688)

I disagree; autofocus is usually better than manual even if you have both - especially if your only image preview is on a relatively low-res LCD, but also if the subject is moving (in macro shots a little subject movement can *completely* de-focus the shot). And face recognition is one of those "blingy"-seeming features that actually makes sense, since in an image with objects at various focal depths, usually you want the face. In cases where that's wrong, a focus-lock button allows you to autofocus at whatever depth you want and then re-frame the shot. The remaining need for manual focus is very small IME. But I am curious why you find it necessary.

Re:Okay. (3, Insightful)

kurokame (1764228) | more than 4 years ago | (#33098066)

That would be a great point if it involved learning something more complicated than bracing your hand.

Also could well help pros (1)

Sycraft-fu (314770) | more than 4 years ago | (#33098244)

There are real limits to the human body. Anyone who says "I can hold a camera perfectly steady" is lying. We are not perfect platforms. So image stabilization can help a lot. Long-range photography, in particular of fast-moving objects as in sports, got a big boost when optical image stabilization came out. The length you could zoom and still get a good shot increased. It wasn't that the photographers were bad; it was that they were at the human limits. Optical stabilizers raised those limits. After-the-fact deblurring could raise them further.

Re:Also could well help pros (1)

JWSmythe (446288) | more than 4 years ago | (#33098644)

There are a lot of limits, including the human factor. Photography is one, but try target shooting (like, with a gun). You'll never see someone who can put 10 shots at 100 feet into the same hole. If they get two, it's dumb luck.

    For cameras, sometimes there are extreme examples. I put my Nikon D90 onto my telescope (a Newtonian). I was shooting over a USB cable to my laptop, so I could use the laptop as a remote trigger, and set the camera to lift the mirror first, so it wouldn't shake. When looking at the moon, I could only see about a quarter of it. Due to the movement of the earth and moon, along with the long exposure and a little motion in the telescope from lifting the mirror, the shots turned out blurry. This is my first moon shoot. [flickr.com] I know there are ways to do it better; this was just my first attempt. I had a clear night, with a bright moon, and some spare time on my hands. :)

Re:Okay. (0)

Anonymous Coward | more than 4 years ago | (#33098120)

You're right. The blur in the image itself is a recording of the motion. It's not so easy to extract this data from the image though. Perhaps you can point me in the direction of some current and promising research on this topic? Also, if you already have the motion recorded and available you don't have to analyze the image to extract the motion so it's faster. Not all research or progress is revolutionary, but that doesn't mean it's uninteresting or useless. And there are situations where you just can't brace your arm or control the light, etc.

Are you sure of that? (1)

Estanislao Martínez (203477) | more than 4 years ago | (#33098146)

Although technically, the blur in the image itself already recorded the motion, with better precision and without calibration issues.

Not unless you know the sizes of all the objects that made it into the frame, and the distance of each to the camera. The camera, without this sort of motion sensor, at most knows the focus distance and view angle, so *maybe* it can guess the height and width of objects at the plane of focus--but then the problem becomes knowing which pixels are recording the object at the plane of focus.

But anyway, if you know the focus distance, view angle and camera motion, you can apply some corrections that are likely to improve acutance of objects at the plane of focus, and maybe regain some resolution. It wouldn't turn a photo with motion blur into the equivalent of one without, because motion blur causes information loss, which will show up as loss of resolution. Or in other words, the correction will probably make large high-contrast edges look sharper, but there will be some loss of fine detail due to the motion.

Re:Okay. (0)

Anonymous Coward | more than 4 years ago | (#33098214)

You'll get better photos out of learning some basic photography than you will out of fancy sensors and analysis software.

I see you don't know shit about people. I guess that's what the real Slashdot effect is... raving geeks who don't understand enough about the real world to have their ideas ever see fruition.

There are a million things the man on the street *could* be doing today that are not only technically sound but make a ton of sense for everyone involved, but the bulk of humanity doesn't care enough to do a single thing that takes them off their path of work, eat, watch TV, shit, die. Most people can't be bothered to learn about who they vote for, if they even bother to vote, and you want them to learn basic photography so they can take better pictures when there's a technological solution already in place? Who the fuck are you fooling with that kind of talk?

Re:Okay. (1)

mobby_6kl (668092) | more than 4 years ago | (#33098330)

And really, you'll get far better results if you just use an adequately short exposure time and some mechanical stabilization. Brace your shooting arm. If you want to get fancy, use something like Canon IS lenses.

Yeah, this is nifty, especially for smartphone-based cameras which may already have built-in sensors to do this. But it isn't exactly revolutionary, either. You'll get better photos out of learning some basic photography than you will out of fancy sensors and analysis software.

Maybe you should learn more about photography; then you'd know that it's not always possible to use a sufficiently short exposure time to get a perfectly sharp result. For example, if it's too dark, the lens isn't fast enough, or the conditions changed too quickly to adjust, among countless other scenarios where such a technique could be useful.

Re:Okay. (1)

jellomizer (103300) | more than 4 years ago | (#33098714)

So you just hate technology being used for new applications. Guess what: people can't be good at everything. That is why technology exists. It doesn't need to replace the expert, but it allows the novice to get the job done more easily.

Re:Okay. (1)

nine-times (778537) | more than 4 years ago | (#33098802)

Although technically, the blur in the image itself already recorded the motion, with better precision and without calibration issues.

In order to have any hope of getting that motion information from the blurred image, wouldn't you also have to have an image of what the scene is supposed to look like without the blur?

And really, you'll get far better results if you just use an adequately short exposure time and some mechanical stabilization.

Well that's the whole problem, right? Short exposure times mean dark images, long exposure times mean blur. Sure, you can set up a professional camera with a tripod and do it the right way, but what about the rest of us who just want to take the occasional picture on a cheap camera without thinking about it? That's most of us.

Blacked out Canon logos (-1, Flamebait)

Anonymous Coward | more than 4 years ago | (#33097942)

Microsoft, you can black out the brand logos, but you forgot to black out the red stripe on that L-Series lens. Dorks.

Also, if you want working de-blurring, try turning the lens's image stabilization on. This is something better suited to the optics than the sensor, and Canon and Nikon both do a very good job with image stabilization. All doing it in-camera will do is suck processing power.

Re:Blacked out Canon logos (2, Insightful)

mark-t (151149) | more than 4 years ago | (#33097958)

Does the camera need to do something else at the time, such that "sucking processing power" is some sort of issue?

Re:Blacked out Canon logos (1)

AnarkiNet (976040) | more than 4 years ago | (#33098234)

Maybe he folds on his phone. Who knows, this is slashdot after all.

Re:Blacked out Canon logos (1)

Firehed (942385) | more than 4 years ago | (#33098068)

Yes and no. There are limitations to how quickly and accurately the physical IS systems can work. Overall they're fantastic and well worth the premium if you're a serious shooter, but this could provide a much cheaper alternative that could be nearly as effective. Also, provided you have sufficiently accurate accelerometer data, you could reprocess the RAWs as deblurring algorithms improve for better results (check out the difference in noise reduction in the latest version of Adobe Camera RAW). This could also be really effective to fine-tune the IS done in-lens.

Re:Blacked out Canon logos (1)

EvanED (569694) | more than 4 years ago | (#33098292)

Also, if you want working de-blurring, try turning the lens's image stabilization on. This is something better suited to the optics than the sensor, and Canon and Nikon both do a very good job with image stabilization.

The question, IMO, isn't so much "is this doing the same thing as in-lens IS?" (Or, IIRC, if you've got a Minolta, in-body IS.) The question is whether you can get any additional benefit over what IS gives you. From what I've heard, present IS systems give you about an extra stop of exposure time over what you'd be able to do without them. Does this give you two extra stops over no IS? If so, it's worth having as an option.

Re:Blacked out Canon logos (1)

am 2k (217885) | more than 4 years ago | (#33098546)

There's an old rule: never do in hardware what can be done in software. Just imagine that you could put this stuff easily into cellphones, which will never include image stabilization as used in DSLRs right now, because it's too bulky and too expensive. All just with a software update.

Re:Blacked out Canon logos (0)

m.dillon (147925) | more than 4 years ago | (#33098636)

Well, the algorithm they are using is real enough, but that is a high-end Canon DSLR. The ultrasonic logo on the lens is clearly visible. Which means these guys have a hell of a lot of low-noise pixels to work with, and it also means they have very fine control over the number of pixels the blur can cross.

How to remove camera shake with a DSLR, 4-step plan:

* Use a high-end DSLR which can take pictures at ISO 3200 with the same noise content as a point-and-shoot at ISO 400. That's 3 stops.

* Use a fast L series prime lens (like, say, a 50mm F1.2L), or use an IS lens. That's another 3 stops.

* Use a camera with 20+ low-noise megapixels. Then reduce the pixel count to 0.5x on each axis. Hell, this is a high-end Canon; you might as well reduce the pixel count to 0.25x. 2 more stops.

Uh.. how many stops so far? 8 stops so far. That isn't enough? WAIT, THERE'S MORE!

The single best way to reduce camera blur with a high-end Canon or Nikon DSLR... (drum roll)...
HOLD DOWN THE SHUTTER BUTTON AND TAKE 5-7 SHOTS. Then pick out the best one in post-production. Tada!

Camera shake is one thing. Blur from subject movement is quite another. When taking photos in low light, there is a point where camera shake becomes irrelevant.

-Matt
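For what it's worth, the stop accounting above works out like this (the per-item counts are the poster's estimates, not measurements):

    # Stops add; the corresponding light factors multiply.
    gains = {"high-ISO full-frame sensor": 3,
             "fast prime or IS lens": 3,
             "downsampling 2x per axis": 2}
    total = sum(gains.values())
    print(total, "stops ->", 2 ** total, "x more usable light")   # 8 stops -> 256x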

Automatically deblur? (0)

Anonymous Coward | more than 4 years ago | (#33098038)

This is hardware that takes into account movement of the camera as you take a picture. Meh... I thought this was a software solution that could deblur a picture after it has been taken. This technology exists but I guess it's more used to get at redacted information rather than make a picture clearer. Seems like it could work though.

Poor man's IS (1, Insightful)

Anonymous Coward | more than 4 years ago | (#33098048)

While it's a nice idea, isn't this just a poor man's image stabilization? Even cheap compacts come with some form of IS these days, and high end SLR lenses certainly do.

Re:Poor man's IS (1)

OzPeter (195038) | more than 4 years ago | (#33098226)

While it's a nice idea, isn't this just a poor man's image stabilization? Even cheap compacts come with some form of IS these days, and high end SLR lenses certainly do.

I think the key point here is the 6DOF measurement of the camera's movement. I will admit to not knowing much about IS, but I would guess that it doesn't handle all 6 DOF.

MicroSoft is impressive at SIGGRAPH (5, Interesting)

peter303 (12292) | more than 4 years ago | (#33098078)

For the past 8 years or so, Microsoft has been a co-author on more papers than any other organization at SIGGRAPH. This is impressive because SIGGRAPH has the highest paper rejection rate of any conference I know of - they reject (or downgrade to a non-published session) 85% of the paper submissions. And you have to submit publication-ready papers nearly a year in advance, with a video summary.

This reminds me of Xerox PARC - great R&D output, poor commercialization of the results. People wonder if their lab is a toy-of-Bill or a tax write-off.

Re:MicroSoft is impressive at SIGGRAPH (1)

SteeldrivingJon (842919) | more than 4 years ago | (#33098664)

"This reminds me of Xerox PARC - great R & D output, poor commercialization of these results. People wonder if their lab was a toy-of-Bill or a tax write-off."

I suspect the idea is mainly to keep the people from going elsewhere.

Re:MicroSoft is impressive at SIGGRAPH (1)

dbIII (701233) | more than 4 years ago | (#33098832)

Microsoft used to get mercilessly flamed by the rest of the industry for doing no research at all: just acquiring companies with technology, ripping off the ideas of others, or entering into dodgy contracts to license technology (e.g. Spyglass: "if you give us that web browser, we'll give you a percentage of every copy of IE sold" - only we're giving it away for free, suckers!). They couldn't keep plundering startups and copying Apple forever and still be viable, so they started Microsoft Research. Some good ideas have come out - some, such as Clippy, were implemented very badly after they left the lab.

Probably not "new" tech.. (0)

Anonymous Coward | more than 4 years ago | (#33098134)

This is probably not "new technology"... just Microsoft's version of the image stabilisation that "real" camera companies have been using for years. It sounds just like the in-camera image stabilisation used by many point & shoot cameras, and some DSLRs... and it would be VERY like Microsoft to copy someone else's technology and pass it off as a new thing. (Just look at the Mach kernel which underlies the NT, 2000, XP, 2003, etc. versions of Windows.)

Now we'll know for sure that it was... (1)

Tablizer (95088) | more than 4 years ago | (#33098406)

...a flying chair.

I am not japanese (1)

dx40sh (1773338) | more than 4 years ago | (#33098458)

This is going to revolutionize the hentai industry.

Kinda ridiculous (1)

Ancient_Hacker (751168) | more than 4 years ago | (#33098506)

The whole premise seems kinda ridiculous. You might have some idea of how the camera swung, but that only helps you if you're pointing at some 2D surface that's perpendicular to the camera.

If there is any depth to the scene, points closer to the camera will move more than points farther away. You might have an estimate of distance from the autofocus feature, but that's only going to help you fix up points near the focus sweet spot. Points closer and farther away are going to be made worse, not better.
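That depth dependence falls out of simple pinhole projection, at least for camera translation (pure rotation shifts the whole frame nearly uniformly, which is the case a single blur kernel handles best). A Python sketch with made-up numbers:

    # Pinhole model: a sideways camera translation t during the exposure moves
    # a point at depth z across the sensor by roughly f * t / z, so translation
    # blur scales as 1/depth.
    f_mm = 50.0            # focal length (illustrative)
    t_mm = 2.0             # sideways drift during the exposure (illustrative)
    pixel_mm = 0.005       # 5-micron pixel pitch (illustrative)

    for z_m in (0.5, 2.0, 10.0):
        blur_px = f_mm * t_mm / (z_m * 1000.0) / pixel_mm
        print("depth %5.1f m -> ~%5.1f px of blur" % (z_m, blur_px))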

Now they just need to attach this to Ballmer's (5, Funny)

melted (227442) | more than 4 years ago | (#33098608)

Now they just need to attach this to Ballmer's head to deblur the company vision a little.

Information theory (3, Interesting)

Vapula (14703) | more than 4 years ago | (#33098646)

Information theory tells us that once information has been lost, it can't be recovered. If the picture has been "damaged" by motion blur, the original picture can't be fully reconstructed.

On the image, we'll have much more than just the motion blur from the camera's movement:
- noise from the sensor's electronics
- blur from subject movement
- distortion from lens defects (mostly on low-end cameras)
- distortion/blur from bad focus (autofocus is not perfect)
...

The operation that reduces the camera's motion blur will probably increase the effect of all the other defects. You reduce one kind of image degradation and increase the impact of the others.
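A quick numeric illustration of that last point: naive inverse filtering divides by the blur kernel's spectrum, so every defect the kernel didn't cause is multiplied by 1/|H| wherever |H| is small (toy Python, illustrative only):

    import numpy as np

    # A box blur's spectrum has near-zeros; dividing by it amplifies noise,
    # lens defects, and focus error at exactly those frequencies.
    n = 256
    kernel = np.zeros(n)
    kernel[:16] = 1.0 / 16                      # 16-px motion blur
    H = np.abs(np.fft.fft(kernel))

    print("min |H|:", H.min())                  # ~0: that information is truly gone
    print("worst finite amplification:", 1.0 / H[H > 1e-6].min())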
