
Using The GIMP (or Photoshop) to Improve Photos?

Cliff posted more than 7 years ago | from the ccds-just-hate-solid-colors dept.

Graphics 111

Nom du Keyboard asks: "Is it possible to use The GIMP (or Photoshop) to improve my digital photos? I have a mid-range 7.1MP Olympus camera capable of shooting in Raw mode. When I inspected a section of clear blue sky on a bright, sunny day (which I've long believed to be relatively good reference of uniform color and brightness) I was surprised (disappointed, since I expect digital perfection) at the variance in adjacent pixels. It's also a quick way to identify any bad pixels. Surprisingly, actual photos from this camera look pretty good despite this variance so far. Moving on from that point it led me to wonder that, if you shot a uniform white surface, perhaps blurred as much as possible to avoid any imperfections in the surface itself, could a correction (adjustment) layer be created in GIMP or Photoshop exactly tuned to your camera that fixed the variations in your CCD sensor and improved the image quality in the process. Any thoughts?"


try it (2, Insightful)

tverbeek (457094) | more than 7 years ago | (#17784094)

I don't know. Why don't you try it?

Re:try it (2, Informative)

smallfries (601545) | more than 7 years ago | (#17784412)

I'd try it twice, with two different but supposedly uniform surfaces. I'll bet that the fluctuations in pixel intensities aren't the same across both surfaces, as they're not caused by a systematic bias in the CCD; rather, they are caused by random noise in the circuit.

If it turns out that there is a systematic bias (ie one that you can correct in the GIMP with a static image) then you would be best off taking a picture of something as black as you can make it. The inside of a bag should do. And then as light as you can make it (not really sure about this one - lightbulbs maybe). From the two masks you can make the image that you want without needing to make a perfect surface first.
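The parent's two-frame test can be sketched with simulated data: if the per-pixel deviations correlate strongly between two shots of a uniform surface, the variation is a repeatable (and therefore correctable) pattern; if the correlation is near zero, it's random circuit noise. A minimal numpy sketch with made-up noise levels, not real camera data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sensor: a fixed per-pixel bias (systematic) plus shot-to-shot noise.
fixed_pattern = rng.normal(0, 2.0, size=(64, 64))

def shoot_flat(level=128.0, read_noise=1.0):
    """Simulate one exposure of a uniform surface."""
    return level + fixed_pattern + rng.normal(0, read_noise, size=(64, 64))

a = shoot_flat()
b = shoot_flat()

# Correlate per-pixel deviations between the two frames: a high correlation
# means the variation repeats (fixed pattern, correctable with a static mask);
# a correlation near zero means it is random noise and is not.
dev_a = (a - a.mean()).ravel()
dev_b = (b - b.mean()).ravel()
r = np.corrcoef(dev_a, dev_b)[0, 1]
print(f"correlation between frames: {r:.2f}")
```

With the fixed pattern dominating the read noise as simulated here, the correlation comes out high; repeat the experiment with `fixed_pattern` zeroed and it drops to roughly zero.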

"as bright as possible" is useless (1)

n3k5 (606163) | more than 7 years ago | (#17789806)

You're right, the bias isn't systematic and it won't work. My favourite way of getting an exposure as dark as possible is to use a camera that can shoot with the shutter closed; some can do that automatically right after a normal exposure in order to detect hot pixels. However, if you take an exposure that is "as light as possible", every pixel will be over-exposed to the max, so you'll get zero information. (Unless you have a sensor with a really unusual fault.)

Re:"as bright as possible" is useless (2, Interesting)

fbjon (692006) | more than 7 years ago | (#17791772)

Many cameras do this automatically as you say, like my FZ30, for instance. It's called a dark frame, and it simply closes the shutter and takes a dark image right after the actual shot, using the same shutter speed, then subtracts it from the original using some algorithm. This will take out hot spots that are mostly consistent over a short period of time, but won't touch any other noise.

Replying to TFA:

surprised (disappointed, since I expect digital perfection) at the variance in adjacent pixels.
Digital perfection does not exist! You (the submitter) are taking images of the real world, where light moves around somewhat randomly in energy packets called photons, not in perfect rays. Noise can not be eliminated, ever. There's also some noise from the electronic components of the camera itself, which you also have to live with, unless you get a better quality camera. Or use some careful noise reduction. You do have the option of creating digital perfection, though. 3ds Max is popular, I gather. :)

If color variance is the problem, however, that's due to the CCD design. The CCD in nearly every camera is a single chip for all three colors (4 in some rare cases), but a single photosite ("pixel") can only detect one color. This means the sites need to be mosaiced in a regular pattern, usually RGBG, which is then decoded into a raster image like JPEG. For your 7.1 Mpix camera, that means about 3.5 Mpix resolution for green, and 1.8 Mpix each for red and blue. This can cause colored blotches in supposedly even areas, but this kind of noise is fortunately really easy to remove with any decent noise-removal plugin. Perhaps it's possible to avoid this noise in the conversion from the raw data, though?
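The per-colour resolution figures above follow directly from the 2x2 RGBG tile: half the photosites are green and a quarter each are red and blue. Quick arithmetic (the 7.1 Mpix count is the submitter's):

```python
# Back-of-envelope photosite counts for an RGBG (Bayer) mosaic.
total_mpix = 7.1

green = total_mpix / 2  # every other site in the 2x2 RGBG tile is green
red = total_mpix / 4
blue = total_mpix / 4

print(f"green: {green:.2f} Mpix, red: {red:.2f} Mpix, blue: {blue:.2f} Mpix")
```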

Most importantly, inspecting the noise of a camera by ogling at 200% isn't very useful; look at larger areas instead. Most noise disappears when put into its normal scale for viewing.

To answer the last question in TFS: yes, if there are bad pixels, it's not hard to find them and create a mask or an action (in PS) that eliminates them. You can also take a noise print for your camera at different ISOs/shutter speeds (at least in Neat Image) and store them for later use in other photos, so you don't need to analyze the noise over and over again. Again, if you want less noise to begin with, make sure you're using the lowest ISO setting that gives you a usable shutter speed, or get a camera with a larger sensor area, which can capture more photons in the same amount of time. Also remember that some noise is good noise. A noiseless picture tends to look a bit unnatural, so don't try to remove all of it.

Re:try it (1, Interesting)

Anonymous Coward | more than 7 years ago | (#17784836)

Because this is Ask Slashdot, where users try not to do any real thinking, instead seeing if people have done it for them.

I think that this section has had 1 good question in the last month.

Re:try it (1)

pijokela (462279) | more than 7 years ago | (#17788590)

While you are absolutely right in your diagnosis of Ask Slashdot: what is wrong with that? Isn't it wonderful that you can get other people's opinions on questions, and you don't have to figure out all the answers yourself?

If our culture was based on figuring everything out ourselves, we'd still be living in caves, learning to catch deer for a living, unless we had already gone extinct trying to figure out how to raise children by ourselves.

As to the subject at hand: is it realistic to expect all the pixels to be the same color even if the target area appears to have a uniform color? Not all the colors in the world are contained in the RGB numbers, so it's quite understandable that different pixels get (slightly) different values as approximations of what is really there.

Re:try it (0)

Anonymous Coward | more than 7 years ago | (#17792062)

The problem is that too many of the AS questions are now way too elementary. I mean, this question, IMHO, is a joke.

And I see nothing wrong with tapping the minds of Slashdot users, but I think there is a limit. For example, a while back there was a question asking something like "What is a good software package for doing X?" I think that is a major waste of time on /. Why not go out and Google it? It is certainly faster than posting on Slashdot. And any meaningful discussion that you spark on Slashdot has already been done a hundredfold, probably at a more pertinent site, and in much more depth.

Now, I will admit my "1 good question in a month" comment was not true (extreme hyperbole), but still, there have been some questions that really just should not make the site.

As far as the culture comment (figuring it out ourselves) goes - no, we would not be living in caves. The idiots who are too lazy to figure it out would still be in caves. I would still be driving my car to work every day, and I would take my kids to the zoo to see those that didn't figure it out.

As far as the Photoshop issue - no, that clearly is not realistic to expect. But is that a question that /.er should answer, or should that be posted on a Photoshop forum?

(Same AC who posted 2 comments ago)

Interesting idea (3, Insightful)

Aladrin (926209) | more than 7 years ago | (#17784096)

That's an interesting idea, but it assumes some pretty clean conditions. The light has to be absolutely the same over the entire surface, it would probably need to be blurred as you said, the surface would have to be absolutely the same color everywhere (no dust, no marks), the surface would have to be completely non-reflective, and probably some other things that I haven't thought of. It would be extremely hard.

It also assumes that the variations are always the same, and that the variations in your photos are from defects and not from the natural color differences in the real world and the digital camera's attempt to map them to a very restricted color palette.

Re:Interesting idea (2, Informative)

Goeland86 (741690) | more than 7 years ago | (#17785516)

That's not too hard. Get a projection screen with a spotlight aimed at it. There's your screen. Then you can create your layer fairly easily, no?
Any meeting room or multimedia classroom will have one of those; anywhere with a projector will work. You just need a pair of tripods, your camera and a fairly powerful wide-range spotlight. Done!

Re:Interesting idea (1)

JabberWokky (19442) | more than 7 years ago | (#17789962)

Spots (at least theater lighting instruments, which is what I have experience with) aren't even at all. You can easily see variances if you look close. Most of theater is about not looking too close, so it works out.

Evan "I have a follow spot in my kitchen right now"

Re:Interesting idea (1)

CastrTroy (595695) | more than 7 years ago | (#17786716)

How do you get a surface that is white but completely non-reflective? Doesn't the fact that you see it as white mean that it reflects white light (or all colours, if you want to get technical)? A surface that is completely non-reflective would be black, although just about every surface I've seen will reflect some light.

Re:Interesting idea (0)

Anonymous Coward | more than 7 years ago | (#17788162)

You cheat and cross polarize. You light it with two lights that are polarized one way and shoot it with a filter polarized across them.

It's out there. (1, Informative)

Anonymous Coward | more than 7 years ago | (#17784124)

My digital rebel xti has software that lets you do what you describe. It's used to correct blemishes caused by dust particles on the sensor. In a regular imaging tool you could probably work out a similar fix by creating a mask that does some enhancements and whatnot where there are darker pixels. Just a guess, I've never done it.

Learn about photography (5, Informative)

linuxbert (78156) | more than 7 years ago | (#17784158)

What you describe is normal, and your question exhibits a lack of understanding about white ballence.
Essentially, if your white is right, then all the other colors will be as well. Your camera has several settings to compinsate for various light types (Tungsten, Fluorescent, Daylight). Yours is probably set to AWB (Auto), which is easy, as the camera will figure it out pretty well. There is also a Custom setting, which you can configure based on the lighting by shooting a grey card - a card that is 15% grey (or thereabouts) that the camera can then use to figure out what true white is.

The variation in pixels can also be the result of the ISO setting you are using. 100 has the least noise, but also requires longer exposures. Higher settings (400, 800, 1600) react faster, but have more noise. This is a tradeoff between desired exposure and ambiant light.

I would suggest reading Strobist [] for more on lighting. There are also several other sites dedicated to post-processing images that you may find helpfull. It also might be worth looking at the various pool discucssion groups on Fliker.


Re:Learn about photography (5, Informative)

Frobnicator (565869) | more than 7 years ago | (#17784538)

You are right that light balance and natural noise are both very important.

Take, for example, the camera-assistant production slates (those little boards you see movie makers use, with the clapper on top). They do a lot more than just show the script location and film location; they also have little black and white (and gray) lines on the clapper. They are amazing tools that are deceptively simple. The clapper makes a sharp noise that lets you sync and balance the audio, digital boards record the sync for individual film frames, and the lines provide for image calibration.

The black, gray, and white boards allow you to balance the brightness in post production exactly the way the original poster was asking for.

Most boards also have calibrated colors to help balance those, as well.

Shooting slate is a very important step in good photography, both for stills and motion pictures.

And to the posters suggesting trying to eliminate all natural noise in photos, you don't really understand what you are talking about. Your eye expects noise in the real world.

Photos need natural noise, they look unnatural or cartoonish without it. Traditional photographs are full of noise because the silver halide gelatin and other chemicals are not perfectly uniform. The chemicals naturally clump up and form noise. (This property makes it easy to identify tampered photos since the natural noise is different between two areas.) Even digital photos get noise when you print them or display them on your screen. If your camera automatically smoothed out all the noise, the image would look like a cartoon or a naively ray-traced image.

As far as using image editing apps such as the GIMP or Photoshop, yes they are able to do a great job with digital images but they are limited by the knowledge and skill of the human using them.

Re:Learn about photography (1)

gwait (179005) | more than 7 years ago | (#17786438)

True, real images have noise in them, but you don't need the camera adding extra noise that is not there in the "real" image.
There are many types of noise from an image sensor. If the camera has fixed-pattern noise, then yes, one could take a shot of a uniform white background, lower its digital intensity until the lowest value is zero, and then subtract that noise-pattern image from a digital photo (taken at the same exposure duration as the noise-pattern image) to clean out the pattern noise. Perhaps some of the higher-end cameras already do this?

The other noise sources are typically random, therefore can not be subtracted out with the same method.

A good CCD (in a well-designed camera) is typically lower noise than a CMOS sensor, but I'm talking scientific-grade cameras. I have no idea if consumer-grade CCD cameras are noticeably better than the CMOS cameras.

No camera in the world will give you exactly the same level in each pixel, given even a high quality uniform light source, (which you need expensive gear to attain).

Re:Learn about photography (1)

flewp (458359) | more than 7 years ago | (#17789638)

True, real images have noise in them, but you don't need the camera adding extra noise that is not there in the "real" image.
So true. The problem I've seen a lot of people struggle with is distinguishing between camera noise (the added noise you speak of) and the real, natural noise. I've seen more than one example of someone applying a filter like Noise Ninja to an image, only to get the cartoony effect that Frobnicator was referring to. For instance, think of a picture of a road. The concrete/asphalt/etc surface is going to have what appears to be noise (the natural bumps, color variations, etc in the road surface), while the sky may have noise introduced by the camera. Say the sky has a lot of noise, so the user goes and bumps up the strength, smoothness, and other settings of Noise Ninja to remove this. The result is that the road also gets blurred, and looks very unnatural. The key is to only apply the filter to the areas that need it, as opposed to the whole image.

Re:Learn about photography (1)

X-treme-LLama (178013) | more than 7 years ago | (#17786724)

Quick technical modification, usually light sensors are looking for 18% grey. At least that's what I have always been taught.

However, there is an interesting post I just uncovered here [] that discusses the true standard as being 12%.

Of course perhaps you knew this and picked the middle ground? :)

Learn about spelling (0)

Anonymous Coward | more than 7 years ago | (#17787330)

ballence, compinsate, ambiant, helpfull, discucssion, Fliker...

good god man.

Noise ninja (4, Informative)

Illusion (1309) | more than 7 years ago | (#17784164)

See Noise Ninja [] for well-known commercial software that does this. There's apparently now also a Linux version.

While playing with it a while ago, I found that JPGs compress something like 25-33% better after you remove the CCD noise. Improving the image quality while making the images take less space seems like a nice combination. :)

This seems like it would be great to get in the hands of more people as a free software app or plugin, but I'm not aware of any.

-- Aaron

Or GREYCstoration (4, Interesting)

dschl (57168) | more than 7 years ago | (#17784296)

GREYCstoration [] . Ugly name, but does the same job, and is open source. Haven't tried it, but there appear to be several plugins for various open source digicam programs and image editors (bottom of their downloads [] page).

Re:Noise ninja (1)

MattBurke (58682) | more than 7 years ago | (#17785068)

Bibble [] (runs on Win/Linux/Mac) includes Noise Ninja and has a host of other features. Lightning quick on raw formats too... One of the few bits of software that's actually worth paying for!

Re:Noise ninja (1)

dascandy (869781) | more than 7 years ago | (#17789462)

Just a small tidbit: cutting off high-frequency mess is exactly what setting your JPEG encoder to a decent quality should do for you. Using a separate tool for it means you're effectively encoding your image as something very much like JPEG, decoding it, and then re-encoding it at "higher quality", which does nothing useful with that data beyond noticing it's a string of 0's and ripping them out.

Re:Noise ninja (1)

Illusion (1309) | more than 7 years ago | (#17791094)

That would be true if Noise Ninja or similar were indiscriminately blurring the image by stripping all high-frequency data, such as just using a lower-quality JPG setting. However, by removing just the CCD noise and leaving the rest alone, JPG actually encodes more of the high-frequency data that I care about. The resulting image turns out much more detailed than doing what you suggest.

Good idea, but use black instead of white. (5, Informative)

NereusRen (811533) | more than 7 years ago | (#17784166)

Using the sky or a white piece of paper may be interesting, but it probably won't give you anything you can use to calibrate the rest of your photos.

A better bet for isolating the noise your camera generates is to take completely black photos, using the lens cap and some extra covering (and a dark room) to make sure absolutely no light hits the sensors. This will let you make raw images of the "dark noise" and "bias noise" that your camera generates, and subtract those images from your real photos before doing any other processing.

Details of this method can be found here: [] .

Re:Good idea, but use black instead of white. (2, Informative)

zippthorne (748122) | more than 7 years ago | (#17784426)

Better cameras already do this, by taking a picture with the shutter closed. You can sometimes select between taking a "dark frame" for every picture and taking a single "dark frame" to apply to all subsequent frames. Best to read the manual for your particular model to see if you have these features.

Re:Good idea, but use black instead of white. (0)

Anonymous Coward | more than 7 years ago | (#17788446)

Also keep in mind that the noise your camera generates varies with temperature. A more-or-less set temperature like a studio or across a single day is all well and good, but for everyday shooting, you'll need to calibrate fairly often.

Yes. (3, Interesting)

oskard (715652) | more than 7 years ago | (#17784168)

If we saw a sample of the photos, it would be easy to determine if they could be fixed. It's hard to understand what the exact problem is from a text description, but the general answer is: yes, anything can be done with The GIMP / PS.

Re:Yes. (1)

limecat4eva (1055464) | more than 7 years ago | (#17785422)

Anything can be done, but sometimes you really don't want to step pixel by pixel through the entire image and edit RGB values individually.

Re:Yes. (1)

shaitand (626655) | more than 7 years ago | (#17785956)

Sounds like you lack dedication. Real men can take it.

Re:Yes. (1)

qzulla (600807) | more than 7 years ago | (#17787360)

Anything? Then show me how to do Marshall Oils in GIMP. I still have not figured it out.



Yes, anything (1)

r00t (33219) | more than 7 years ago | (#17787548)

Even if you left the lens cap on, you can repair the image in The GIMP. It does take some manual editing of the RGB values at each pixel location, though.

Yes Exactly! Only Backwards.... (5, Informative)

DonnarsHmr (230149) | more than 7 years ago | (#17784172)

You're almost right. The method you're describing is called Dark Frame Subtraction. The idea is that you photograph the non-random noise inherent in the sensor and then take that out of the captured images. To do this, you make an image that is completely black (i.e. body cap on the front of the camera and viewfinder cover on the back) at the same temperature conditions and the same shutter speed as the image you are trying to fix. Then you add that as a layer in Photoshop, subtract it from the real image, and the non-random noise disappears.

However, it is MUCH more likely that the noise you are complaining about is random thermal noise, which is not treatable via Dark Frame Subtraction. Because it's, well, random noise, it'll be different in every shot. There are several photoshop plugins that can address this issue. In my opinion, the most effective and easiest to use of them is Noise Ninja.
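As a rough illustration of what the subtraction step does (simulated frames with one made-up hot pixel; real dark frames would come from the camera, and the subtraction should properly happen on linear raw data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical frames as float arrays: a real exposure and a dark frame
# shot with the body cap on at the same shutter speed and temperature.
hot = np.zeros((32, 32))
hot[5, 7] = 40.0                                  # one simulated hot pixel
scene = 100.0 + hot + rng.normal(0, 2, (32, 32))  # uniform subject + noise
dark = hot + rng.normal(0, 2, (32, 32))

# Subtract the dark frame to remove the fixed (non-random) component,
# then clip back into the valid pixel range. The random noise remains.
corrected = np.clip(scene - dark, 0, 255)
print(f"hot pixel before: {scene[5, 7]:.0f}, after: {corrected[5, 7]:.0f}")
```

Note that the random component in both frames survives the subtraction, which is exactly the parent's point about thermal noise.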

Re:Yes Exactly! Only Backwards.... (3, Interesting)

hankwang (413283) | more than 7 years ago | (#17784500)

Then you add that as a layer in photoshop, subtract it from the real image, and the non-random noise disappears.

I doubt that will work. Once in the computer, the pixel values are not proportional to the absolute brightness; see gamma correction [] on Wikipedia. You would need to do the subtraction on linearly encoded data (12 or more bits rather than 8). Maybe Photoshop can indeed do this, provided you find the right settings, but GIMP, as far as I know, doesn't.
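The point can be shown numerically. A sketch assuming a simple power-law gamma of 2.2 (real camera tone curves differ): subtracting in the gamma-encoded space gives a different, darker answer than subtracting in linear light.

```python
import numpy as np

GAMMA = 2.2  # assumed simple power-law curve; real camera curves vary

def to_linear(v8):
    """8-bit gamma-encoded value -> linear light in [0, 1]."""
    return (v8 / 255.0) ** GAMMA

def to_gamma(lin):
    """Linear light in [0, 1] -> 8-bit gamma-encoded value."""
    return np.clip(lin, 0.0, 1.0) ** (1.0 / GAMMA) * 255.0

scene8 = np.array([120.0])  # gamma-encoded pixel from the real shot
dark8 = np.array([30.0])    # same pixel in the dark frame

naive = scene8 - dark8                                   # wrong: gamma space
linear = to_gamma(to_linear(scene8) - to_linear(dark8))  # right: linear space
print(f"naive: {naive[0]:.1f}, linear-space: {linear[0]:.1f}")
```

The naive gamma-space subtraction lands around 90, while the linear-space result is around 117: subtracting encoded values over-corrects the shadows.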

Re:Yes Exactly! Only Backwards.... (1)

DonnarsHmr (230149) | more than 7 years ago | (#17784548)

Well, I find it hard to believe that GIMP can't do something that photoshop can [] .

Re:Yes Exactly! Only Backwards.... (2, Interesting)

tigersha (151319) | more than 7 years ago | (#17785172)

Ok, here is a list to quell your doubts.

Photoshop has:

Adjustment layers, which allow you to change filters after the fact
Filter layers, which allow you to switch a stack of filters on and off and season to taste after the fact (in CS3)
      These two features let you view image processing more like a spreadsheet, in the same way that Excel is better than a calculator

Can do filters on the GPU in hardware (in CS3)
Save for web
Absolute color systems (Lab color)
Capability to do color proofing for printing presses (needs absolute color conversions)

Import and manipulation of smart object layers, and changing them after the fact
Layer styles, which allow you to change the layer after the fact, with copy and paste, which is really useful for making lots of images in the same style
16- and 32-bit and HDR color
Better macro recording (GIMP is probably easier to program, though)
The history brush
The patch tool
MUCH better image size interpolation (if you resize an image it looks better in PS)
A text tool that does its preview on the image and not in some box outside of it
Much better text layers
A UI that was not written by the spawn of Satan

Basically, Photoshop is like a spreadsheet and GIMP is like a calculator. PS allows you to do much better look-and-feel stuff.

And no, the price comparison argument does not really hold. GIMP's competitor is Photoshop Elements, which has all the features except the press stuff.

Re:Yes Exactly! Only Backwards.... (1)

r00t (33219) | more than 7 years ago | (#17787572)

The scaling just got fixed. It'll be in the next release AFAIK.

Re:Yes Exactly! Only Backwards.... (1)

Lehk228 (705449) | more than 7 years ago | (#17787858)

Gimp allows you to see your text on the image as you type.

The GIMP interface is better than Photoshop's, unless you have learned Photoshop first.
The price comparison does hold, because by being free GIMP can be acquired trivially on any internet-connected computer. Whatever version of Photoshop you use, if you are visiting someone across the country and need/want to do some image work, you are SOL if you depend on Photoshop.

also the vast majority of photoshop users are criminals.

Re:Yes Exactly! Only Backwards.... (1)

Phisbut (761268) | more than 7 years ago | (#17788524)

also the vast majority of photoshop users are criminals.

Over here on /. we call them "copyright infringers", not "criminals". They're civil offenders.

Re:Yes Exactly! Only Backwards.... (1)

Mr Z (6791) | more than 7 years ago | (#17790668)

Not after the DMCA. Circumventing an access control is a criminal act under the DMCA. Using a hacked key or similar is sufficient to meet that threshold.

Re:Yes Exactly! Only Backwards.... (1)

Goaway (82658) | more than 7 years ago | (#17787038)

Well, I find it hard to believe that GIMP can't do something that photoshop can.

Wow. You don't get out much, do you? So to speak?

Re:Yes Exactly! Only Backwards.... (1)

budgenator (254554) | more than 7 years ago | (#17785446)

Take a look at CinePaint [] - it supports 8-, 16- and 32-bit color channels and the Kodak Cineon (CIN), Digital Picture Exchange (DPX) and OpenEXR (EXR) file formats.

Re:Yes Exactly! Only Backwards.... (0)

Anonymous Coward | more than 7 years ago | (#17784502)

Or it could be JPEG artifacts.

Sure Can. (3, Informative)

philibob (132105) | more than 7 years ago | (#17784178)

You already can. Some cameras let you shoot against a blank white area to compensate for dust particles on the CCD. It's called "Dust Reference" in Nikon Capture, which works with most of their DSLRs.

Astronomers do this all the time (0)

Anonymous Coward | more than 7 years ago | (#17784180)

The idea is to take a photo of a perfect white background and then again with the lens cap on (to catch hot pixels).

Another thing not commonly known is that the CCD imperfections vary based on temperature. You want to keep the sensor as cool as possible.

Something like this? (3, Informative)

Quixotic137 (26461) | more than 7 years ago | (#17784182)

yes.. (2, Informative)

slashkitty (21637) | more than 7 years ago | (#17784188)

I don't know about your particular problem, but other camera flaws have been fixed with processing. For example, if your camera adds a vignette, you shoot a piece of white paper, then remove that shading from all the photos. This gives you an automatic, scriptable way to do that with ImageMagick:

Vignettation Removal []
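The core of that flat-field correction can be sketched in a few lines of numpy (the radial falloff model and all numbers here are invented; the ImageMagick recipe linked above does the real work): divide each photo by the flat frame normalised to its brightest value, and the shared falloff cancels.

```python
import numpy as np

# Hypothetical flat frame: a shot of evenly lit white paper showing the
# lens falloff (bright centre, darker corners), modelled radially here.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h / 2) ** 2
falloff = 1.0 - 0.4 * r2     # simple made-up vignette model

flat = 200.0 * falloff       # what the camera records of the white paper
photo = 120.0 * falloff      # a real shot suffering the same falloff

# Divide by the flat normalised to its maximum; the falloff cancels out,
# leaving the scene at a uniform level.
corrected = photo / (flat / flat.max())
print(f"corner before: {photo[0, 0]:.0f}, after: {corrected[0, 0]:.0f}")
```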

Re:yes.. (2, Informative)

YttriumOxide (837412) | more than 7 years ago | (#17785558)

Note to anyone who plans on doing this: good digital SLRs have this kind of function built in, and you should only consider it if your camera doesn't. The quality of the adjustment will actually be significantly worse unless you ensure:

  1. The light is hitting the white paper evenly
  2. The white paper is a nice bright and clean white (don't even THINK of using standard office copy paper)
  3. The paper is of a very short grain
  4. There is no curving or folding in the paper
It might be better to consider the alternative black reference - while a good system would use both, for a system which has neither, a black reference is easier to get right than a white one.

(disclaimer: I don't know much about photography, but I know a LOT about paper, colour theory and image editing)

Re:yes.. (1)

r00t (33219) | more than 7 years ago | (#17787596)

The camera is actually crap as far as this goes.

The problem will vary with zoom and aperture, and maybe even with focus. If a flash is involved (yuck), you have to deal with even worse problems from that.

Not quite... (3, Informative)

Joe Decker (3806) | more than 7 years ago | (#17784210)

Discounting truly bad pixels, variations in the sensor readings on an even sky have two sources: pure sampling noise from the fact that the sensor is only reading a finite number of photons, and a more constant, but still varying, per-pixel offset. It's likely with a daylight shot that you're primarily seeing the former; the latter effect tends to be more significant during the long exposures used in astrophotography. Check out the "Digital Rebel" astrophotography page here []; it outlines a procedure for measuring and subtracting off this varying per-pixel offset, but note that you need to compute the "dark frame" (or offset) for a particular set of conditions (temperature, ISO, exposure time). That subtraction could be done in PS, but again, you really need a new "dark frame" for each shot.

It is possible to smooth rough skies and such in Photoshop; I can't speak from personal experience with the GIMP, but I'd expect something similar would work. I'd take the image, duplicate a regular (non-adjustment) layer on top of the main image (call that second one "smoothed"), blur it (Gaussian blur; fiddle with the radius to keep the effect gentle), add a layer mask to "smoothed" and paint it so that it only targets the sky in the shot. You may end up finding that you want to leave a little noise in the resulting image to avoid posterization []; if your results are too smooth you can always adjust the opacity of the smoothed layer downward.
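That blur-plus-mask blend can be sketched in numpy (a crude box blur stands in for Photoshop's Gaussian blur; the "sky" mask and noise levels are invented):

```python
import numpy as np

def box_blur(img, k=3):
    """Crude blur: average over a k x k neighbourhood (Gaussian stand-in)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

rng = np.random.default_rng(3)
img = np.full((20, 20), 150.0) + rng.normal(0, 8, (20, 20))  # noisy "sky"

# Layer-mask blend: 1.0 where the smoothed layer should show through
# (the sky), 0.0 where the original detail must survive untouched.
mask = np.ones_like(img)
mask[10:, :] = 0.0        # pretend the bottom half is foreground detail

smoothed = box_blur(img)
result = mask * smoothed + (1 - mask) * img

print(f"sky noise before: {img[:10].std():.1f}, after: {result[:10].std():.1f}")
```

The masked region keeps its pixels bit-for-bit, which is the whole point of painting the mask rather than blurring globally.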

Take some photos with the lens cover on... (1)

lorian69 (150342) | more than 7 years ago | (#17784214)

Though I've never tried it myself, I have heard that you can take long-exposure photos with the lens cap on to reveal any consistent noise in your camera and filter it out. By using a long exposure, you can highlight which pixels are brighter than others, then use that image to mask out the same noise in your other photos.
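A sketch of that mask-building step (hot-pixel positions, levels, and the detection threshold are all simulated; a real capped long exposure would replace the `dark` array):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical long-exposure dark frame: low read noise everywhere, plus
# two simulated hot pixels at made-up positions.
dark = np.abs(rng.normal(0, 1.5, size=(16, 16)))
dark[3, 4] = 200.0
dark[10, 11] = 180.0

# Flag anything far above the typical dark level as a hot pixel.
mask = dark > 50.0

# In a real photo the same pixels run hot; patch each flagged pixel with
# the median of its 3x3 neighbourhood (the median ignores the outlier).
photo = np.full((16, 16), 80.0)
photo[mask] += 150.0

fixed = photo.copy()
for y, x in zip(*np.where(mask)):
    window = photo[max(y - 1, 0):y + 2, max(x - 1, 0):x + 2]
    fixed[y, x] = np.median(window)

print(f"hot pixels found: {int(mask.sum())}")
```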

Clear blue sky != monochromatic (2, Insightful)

Gothmolly (148874) | more than 7 years ago | (#17784228)

a) If you are using anything above ISO50 on a cheap digital (like yours), you will get ISO noise
b) blue sky is not really blue, you can't expect 7.1 million pixels to all agree
c) there may have been microscopic dust on your lens

Basically, you're looking for your camera to be Adobe Illustrator, and it isn't.

Yes (3, Informative)

YGingras (605709) | more than 7 years ago | (#17784230)

Scale down the picture, choose cubic interpolation, and you're done. You can't fix the original (the information is scrambled already), but you can use the information of the larger image to average the pixels of the smaller image into something clean. When you read X megapixels, you should know that this is a scam. There are no cameras out there that will give you an image usable at X resolution, but you can still have pretty pictures at X/2 (which is roughly 3/4 of the side length of the original).
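The gain from downsampling is easy to demonstrate: averaging n uncorrelated pixels divides the noise standard deviation by sqrt(n). A sketch with simulated noise, using 2x2 block averaging as a stand-in for cubic resampling:

```python
import numpy as np

rng = np.random.default_rng(4)

# Uniform grey scene plus made-up per-pixel sensor noise.
full = np.full((128, 128), 100.0) + rng.normal(0, 8.0, (128, 128))

# Downscale 2x by averaging each 2x2 block: averaging 4 pixels divides
# the noise standard deviation by sqrt(4), i.e. roughly halves it.
half = full.reshape(64, 2, 64, 2).mean(axis=(1, 3))

print(f"noise at full size: {full.std():.1f}, at half size: {half.std():.1f}")
```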

Not only those (1)

shirizaki (994008) | more than 7 years ago | (#17784234)

If you're using Windows, you can download the .NET framework and grab Paint.NET. I have it and it's great if you're familiar with MS Paint. That, plus the ability to add different plugins, makes it a wonderful free alternative to programs like GIMP.

Re:Not only those (2, Informative)

YttriumOxide (837412) | more than 7 years ago | (#17785612)

Paint.NET is really great for those who need a quick and dirty image editor with a lot more power than MSPaint. However, be careful - on most systems it's a SERIOUS resource hog when dealing with large images (such as the 8-megapixel images from my camera). I find Paint.NET is great for anything that fits on my screen at no less than about 50% scaling, but go above that and my poor little work laptop (Dell Latitude D510 - 512MB RAM, 1.73GHz Pentium-M) will choke and die with lots of swap file use. Photoshop (CS) and the GIMP, on the other hand, hardly run like a dream either, but they both deal with large images in a much nicer way.

I assume the problem is pretty much entirely RAM related and if you throw a decent amount of RAM at it, you'll be able to work with much larger images, but you'll quickly find you do have a very definite threshold and it'd be wise to avoid going above that.

Something similar (4, Informative)

ceoyoyo (59147) | more than 7 years ago | (#17784236)

Something similar is done in astrophotography. There are two kinds of fields you can remove from your images. A dark frame (taken with the lens cap on) is subtracted to remove things like pattern noise, hot pixels, and amp glow that appear in images. A flat frame is then used to remove multiplicative effects, like vignetting and dust spots. Acquiring a flat frame can be tricky; one of the best ways is to use a translucent lens cap and a fairly bright light that provides fairly uniform illumination.

However, the effects (unless there's something seriously wrong with your camera) are really only noticeable for long exposures.
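The dark-and-flat procedure described above can be sketched in a few lines of numpy. Everything here is a fabricated toy sensor, not a real camera model:

```python
import numpy as np

rng = np.random.default_rng(3)
shape = (64, 64)

# Invented sensor signatures, purely for illustration
dark_signature = rng.uniform(5.0, 15.0, shape)       # additive: pattern noise, hot pixels
flat_response = 1.0 - 0.3 * rng.random(shape)        # multiplicative: vignetting, dust

scene = np.full(shape, 100.0)                        # the ideal, uniform subject

light_frame = scene * flat_response + dark_signature # what the camera records
dark_frame = dark_signature.copy()                   # lens cap on, same exposure
flat_frame = 200.0 * flat_response + dark_signature  # shot of uniform illumination

# Calibration: subtract the dark, divide by the normalized flat
flat_cal = flat_frame - dark_frame
flat_cal = flat_cal / flat_cal.mean()
calibrated = (light_frame - dark_frame) / flat_cal

print(light_frame.std(), calibrated.std())           # calibrated frame is uniform again
```

Note the order matters: the dark is subtracted from both the light and flat frames before dividing, because the dark signature is additive while vignetting and dust are multiplicative.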

Re:Something similar (2, Interesting)

dougmc (70836) | more than 7 years ago | (#17784770)

One of the best ways is to use a translucent lens cap and a fairly bright light that provides a fairly uniform illumination.
We used to just point the telescope up during the day and take a picture of a nice blue sky -- it worked very well. (Of course, this was 15 years ago, and maybe things have changed somewhat and there are better ways to do it now.)

Re:Something similar (2, Interesting)

gsn (989808) | more than 7 years ago | (#17785104)

Still do sky flats :-) but it does depend on what passband you care about for imaging; twilight sky flats work pretty well in B. These are sort of bothersome on larger telescopes because you don't want to saturate, you do want good statistics, and you don't want to cut into observing time, and you have to slew between each one to reject any bright early-rising stars. A lot of big telescopes use quartz lamps to illuminate a screen and image that. Dome flats are pretty common these days, especially in spectroscopy, but for photometry it's nice to still get a set of sky flats. I take a bunch of flats for each instrument setup and median them before flat fielding. There are more sophisticated methods around the corner that will vastly improve calibration for projects like Pan-STARRS and later LSST. (Disclosure: I work with some of the people on said paper, but not on this project.)

Dark frames aren't actually as useful anymore for instruments on larger telescopes that use LN2 or a cryotiger for cooling.
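The median-combining step mentioned above is easy to illustrate with numpy (invented numbers): a per-pixel median rejects a stray bright star that would badly bias a plain mean.

```python
import numpy as np

rng = np.random.default_rng(4)
# Five sky flats; a bright star drifts through one of them
flats = 100.0 + rng.normal(0.0, 3.0, size=(5, 32, 32))
flats[2, 10, 10] += 500.0            # the star lands on this pixel in frame 2

master_median = np.median(flats, axis=0)  # per-pixel median rejects the outlier
master_mean = flats.mean(axis=0)          # a plain mean would be badly biased

print(master_mean[10, 10], master_median[10, 10])
```

This is also why you slew between sky flats: the star lands on different pixels in different frames, so at each pixel it's an outlier the median can throw away.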

Yes (4, Informative)

Ankh (19084) | more than 7 years ago | (#17784242)

First, the biggest improvement you are likely to see in the GIMP is if you go to Colours->Levels (in older versions of the GIMP it's Layers->Colours->Levels) and click Auto. For pictures that should contain some black and some white this will usually make a noticeable improvement.

Second, yes, Canon (for example) includes (Windows only, proprietary, secret, closed-source) software to compensate: you shoot a 25% grey surface. You can also use this inside the camera itself: there it will use the data for white balance correction.

In practice, though, it's fairly hard to do this yourself. One difficulty is that the amount and position of colour aberrations will probably vary depending on the lens you use, or, with a fixed lens, the amount of zoom and the aperture size. I know I found that when my Casio developed some dark spots.

There are some programs used with hugin, the panorama-stitching UI, that help with some lens corrections; it might be worth asking those people. However, a lot of the variation you are seeing is likely to be digital noise. Try taking 3 shots using a tripod and timer or remote, and comparing them.

It's called "noise reduction" (1)

JeremyR (6924) | more than 7 years ago | (#17784292)

Others have alluded to it already, but what you're asking for sounds exactly like noise reduction. And there's plenty of software out there that does that. The problem with noise reduction is that it reduces fine detail as well. (Although some software does a respectable job, it can't perform miracles.)

If you're concerned about noise, what nobody has pointed out yet is that you may want to consider a camera with fewer pixels, a physically larger sensor, or both. Cramming 7 million photosites onto a tiny 1/2.5" sensor (yes, they are measured in a strange way--that translates to somewhere around 5.8 x 4.3mm) is a sure recipe for noise. However, there doesn't seem to be any slowing down of the megapixel race, as Sharp has just announced an eight-megapixel sensor in this size.

Or, you just might be satisfied with the images you can get from your camera. As you've noticed, "actual photos" (which I assume to mean prints) still look good despite the noise clearly visible when viewing at actual pixels. Printing, especially at modest sizes (e.g. 4 x 6"), has a way of smoothing out the noise, so if you're not a "pixel peeper" you may never notice. :-)


7MP on 1/2.5" is indeed crap (1)

r00t (33219) | more than 7 years ago | (#17787664)

I got only 6.3 MP (6.0 MP active, 6.1 MP claimed) on a 23.5x15.7 mm chip. Your example was 7 MP on 5.8x4.3 mm.

Going by the 6.3 MP figure...

Mine is thus 58.56 square micrometers per pixel.
Your example is only 3.56 square micrometers per pixel.

That is a factor of 16.44 difference.
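For anyone checking the arithmetic, here is the same calculation in Python (figures taken from the post above):

```python
# mm -> µm on each side, then sensor area divided by pixel count
dslr_um2_per_px = (23.5 * 1000) * (15.7 * 1000) / 6.3e6      # big-chip camera
compact_um2_per_px = (5.8 * 1000) * (4.3 * 1000) / 7.0e6     # 1/2.5" compact

print(round(dslr_um2_per_px, 2))                       # 58.56
print(round(compact_um2_per_px, 2))                    # 3.56
print(round(dslr_um2_per_px / compact_um2_per_px, 2))  # 16.44
```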

Uniform pixel sensitivity (2, Informative)

Technician (215283) | more than 7 years ago | (#17784336)

Pixels are not identical in their dark current and light sensitivity.

For information on correcting these issues, which compound in long exposures, find a good astronomy photographers' forum. They discuss taking long exposures of various times with the camera capped to identify bright (high dark current) pixels. They use these corrections in their star shots of the same exposure time to subtract out the brightness caused by high-dark-current pixels. In bright scenes the same thing can be done to correct for low-sensitivity pixels. A way-out-of-focus shot of a white screen with primary color filters or lighting should be able to give you some good sensor correction factor data. Remember that the errors are temperature sensitive, so a full correction may be hard to get.

Good thing you don't shoot with film (4, Insightful)

Overzeetop (214511) | more than 7 years ago | (#17784346)

You'd be just devastated if you blew a film image up to the level where you could see the grain.

Here are two questions for you:

1) Do you find that you are printing your images at sizes larger than 12x18?

If you are, then you probably ought to have more pixels (i.e., a better camera). I'm okay with digital pictures down to about 150dpi; others swear that you need 300+. Then again, there are people who swear that $3000 unobtainium-coated silver strands wrapped in virgin PTFE and assembled when the planets are in alignment make their music sound better.

2) Presuming you are actually printing at at least 200dpi, can you really see the difference without a loupe on the final prints? I'm not worried about your monitor, because I'm going to bet that if you have a consumer-level camera, you're not doing photoediting on a 7.1MP monitor.

You see, if you can't tell, don't worry about it. Let your geek side go and spend more time in the field and less time in the darkroom. Seriously - unless you have significant image problems you can see in your final output, the camera and imaging is good enough. Go take some great pictures, and worry a bit less about having digitally perfect pixels.

No, it's not your observation (1)

Flying pig (925874) | more than 7 years ago | (#17784820)

Responding to your sig, it's not that people are particularly stupid, it's that the world is getting more complicated faster than most people can keep up. Also technology is getting democratised. "Audiophilia" that you mention arises because the scientifically illiterate want to seem more capable and knowledgeable around audio equipment than they actually are, and there are more people than before who can afford this stuff. Once upon a time the average middle class joe could afford one hobby, so he became pretty knowledgeable about audio or photography or golf or malt whiskey or whatever it was. Now he can afford the equipment for just about everything but he does not have the brain power to learn them all in depth.

It's noticeable that the tossers who ten years or so ago believed that buying a Leica or a Contax would suddenly make them great photographers (but not, somehow, learning to print, which is hard work) now get together and boast about their megapixel count, monitor size and Photoshop add-ons, but the pictures are still crap. Understanding CCDs is about as difficult as understanding the different grain types of different halide films, and they haven't the interest to do that. When I realised I was never going to be a great photographer I bought myself a small, simple, and so far very reliable Pentax digital and stopped worrying. I find that ceasing to worry about CCD behaviour (I did a lot of work on CCD imaging some years ago) doesn't make my photos any worse. A bit of colour balance, a bit of saturation control, a bit of scaling and getting the right DPI for the printer - that's all the average person really needs.

Almost totally OT: Response to a response to a sig (1, Insightful)

YttriumOxide (837412) | more than 7 years ago | (#17785774)

Your post was a little offtopic; and now mine is WAY offtopic, but I have to respond. Hopefully the mods will look kindly and my "Offtopic" mods will equal my "insightful" for a break-even ;)

I disagree with your basic premise here completely. Everything you say about KNOWLEDGE is correct, but that doesn't address stupidity, which "Overzeetop"'s sig is about. There are indeed many more stupid people in the world than there used to be, and I put it down to many factors - a noticeable one that is different in today's society compared to the recent past being personal responsibility.

First let me define intelligence (and therefore also stupidity): the very definition of intelligence is debated, and some people's definitions include such things as knowledge. But if we're going for a "purist" definition, then it boils down to "the ability to figure stuff out" (reasoning). Naturally, those WITH more knowledge are likely to be more intelligent, and those who are intelligent will likely gain more knowledge; however, knowledge itself is not a factor in the definition.

Now, back to personal responsibility: Once upon a time, if someone did something stupid, they'd suffer the consequences for it. These days, they can blame others for their own mistakes. Because of this, they generally don't learn "the hard lessons" and will continue to do stupid things. So we can see from this that personal responsibility has a direct effect on learned intelligence. Now, there is also a direct effect the other way as well - those who are intelligent are less likely to blame others when they do something stupid once in a while, and they will learn "the hard lesson" from it. I put this down to innate intelligence (be it genetic or learned at a young age, that's a debate I won't discuss here).

Other factors which I'll mention, but not go in to such great depth on, include: less practical and more faith based adherence to religious ideals (somewhat related to personal responsibility); less importance placed on intelligence in many modern education systems (it's okay to be stupid; we'll teach you how to get by as you are); and games that don't include as much critical thinking in order to win (games of chance or reaction vs games of skill (I'm thinking mainly of non-computer games here, but it does apply to both)).

To try and save my "on-topicness" a little, I'll just say I agree completely with your analysis of people's lack of desire to learn about the things they need to know in order to be good at what they want to - they want an "easy fix". Some of this may actually fall back to my definition of stupidity above, but it probably falls more back to sheer laziness, which is closely related to stupidity and has many of the same factors, but I'd class as a mostly independent phenomenon.

Re:Good thing you don't shoot with film (2, Informative)

Andy Dodd (701) | more than 7 years ago | (#17786736)

"If you are, then you probably ought to have more pixels (i.e., a better camera). I'm okay with digital pictures down to about 150dpi, others swear that you need 300+. Then again, there are people who swear that $3000 unobtainium coated silver strands wrapped in virgin PTFE and assembled when the planets are in alignement make their music sound better."

More pixels is not necessarily better. More sensor area is usually more important.

This is why high-end DSLRs with only 4-5 megapixel resolution deliver better images than 7-8 megapixel consumer cameras - larger sensor elements result in higher signal to noise ratio at the sensor, which means less image noise. Considering the submitter's problem is image noise, a higher resolution camera with the same sensor size is NOT going to help them.

Re:Good thing you don't shoot with film (1)

Overzeetop (214511) | more than 7 years ago | (#17789896)

All true, but in this case, in order to get more pixels, he/she will have to go to a "better" camera - i.e. a DSLR. There aren't many cheap cameras with significantly more than 7 MP (haven't been in the market in the last few months, but most P&S go about 8 max, iirc).

I totally agree about the sensor area. It's one reason that my P&S was selected for sensor size/efficiency, but I know that a DSLR at 2 stops faster and the same pixel count will still produce better pictures.

Unfortunately, the OP has fallen into the young-technophile trap, thinking that he can fix a technical shortcoming and get a better picture. This hasn't changed since before I was born - the quality of the photograph depends more on the person behind the camera than the camera itself. You can produce great pictures with a mediocre camera, but you will never get consistently great pictures from a mediocre photographer.

Personally, I'm a Nikon guy through-and-through, but given the cash to go get a high-end DSLR, I'd get the Canon. They're the only company producing a 24x36mm sensor (i.e. 35mm), and I think this will always give them an edge. It's a shame, too, since I've got a small fortune in Nikon glass. Then again, what good is a fabulous wide-angle F-mount lens if you end up with a normal focal length on that snazzy new DSLR?

How to Improve Any Photo with Photoshop (1)

superdan2k (135614) | more than 7 years ago | (#17784496)

1. Open image file.
2. Duplicate layer.
3. Select the subject of your photo using the lasso tool. It doesn't need to be perfect, just outline it.
4. Go to Select -> Feather. Give it about 30px, when it asks.
5. Go to Layer -> New -> Layer Via Copy.
6. Go to the second layer; this one should be called "Background copy"...or whatever you renamed it.
7. Go to Filter -> Blur -> Gaussian Blur, and then blur that layer such that you can still make out shapes.
8. Save new image.

Yuck. I really, really hope you're joking. (1)

Glytch (4881) | more than 7 years ago | (#17784804)

This will simply make your subject look like a cardboard cutout. It's a half-decent gimmick if you're doing webpage design, but useless for real photography.

Folks, if you want to isolate a subject, use a lens with a larger opening, narrower field of view, or just get closer to your subject.

slight modification (1)

r00t (33219) | more than 7 years ago | (#17787702)

Instead of blurring, save as a minimum-quality JPEG and then load it again.

As long as you maintain alignment with the 8x8 JPEG compression blocks (possibly 8x16, 16x8, or 16x16 in the chroma channels) you'll get very little additional loss from subsequent recompression. The high-frequency information is simply gone.

Now the non-critical parts of the image will compress really well.

Re:Yuck. I really, really hope you're joking. (1)

stanmann (602645) | more than 7 years ago | (#17790988)

Folks, if you want to isolate a subject, use a lens with a larger opening, narrower field of view, or just get closer to your subject.
Please oh great guardian of wisdom, how do I get closer to the SKY?

Why do you care for photographs? (5, Insightful)

dlevitan (132062) | more than 7 years ago | (#17784556)

Yes, your $1000 digital camera is not going to have a perfect CCD. There is no such thing as a perfect CCD. And I don't understand why you care unless you're trying to do science work with it. Look at it this way: no one is ever going to look at your picture and say it's horrible because one pixel is slightly different from the one next to it. You look at the content of the whole photograph, not three pixels.

If you are trying to do science, then a DSLR is not what you need. DSLRs use Bayer interpolation to create a color image. This inherently kills your accuracy since not every pixel in the image is actually a pixel on the camera. CCDs used for astronomy (which cost more than your whole camera) do not do this and they still suffer from the effects you mentioned. Every exposure used for scientific work goes through a whole data reduction process that tries to remove as much noise as possible. Others have mentioned most of the process (bias frames, dark frames, and flat fields), but most astronomical CCDs also have an overscan region which is part of the CCD that is not exposed to light and is used to record the thermal noise on the CCD. This changes from exposure to exposure and from temperature to temperature (and yes I am a researcher in astronomy).

In short, there's no reason for you to care about this, and there's no chance of fixing it completely (CCDs are not digital - they're analog). There's also no way of applying the same solution to every photograph (and CCDs can change over time). Don't worry about pixel-to-pixel variations and just take photographs for their content. If you're really interested in how CCDs work, read the Handbook of CCD Astronomy by Steve Howell. It's a great introduction to CCDs and how to use them for astronomy.

Some books to read... (1)

dudeX (78272) | more than 7 years ago | (#17784946)

I suggest reading this book for color management:
Color Management for Photographers: Hands on Techniques for Photoshop Users by Andrew Rodney.

The short gist is that you want to get a color calibrator like an Eye-One Display II or a ColorVision Spyder2Pro to calibrate your monitor to a standard.

Second, you will want to get these two books for color correcting your images.
Photoshop LAB Color: The Canyon Conundrum and Other Adventures in the Most Powerful Colorspace by Dan Margulis
Professional Photoshop: The Classic Guide to Color Correction (5th Edition) by Dan Margulis.

These 3 books should be enough for a budding photographer to learn all the advanced techniques to get great results from your shots.

Another program you may want to look at is Bibble Labs RAW image editor. They have a Windows and Linux version.


old trick (1)

rmadmin (532701) | more than 7 years ago | (#17784994)

This should help. It's for long-exposure shots, but the same concept applies. Keep in mind that your camera sensor won't always show the same noise, so you'll probably end up doing this for every shot.

JPEG compression (-1, Offtopic)

denissmith (31123) | more than 7 years ago | (#17785018)

First, yes, you can use GIMP and Photoshop to improve your images. Second, the problem you are noticing can't easily be fixed in either program. The pixellation you see is most likely a result of JPEG compression; even JPEG fine can cause this. JPEG is lossy compression, and it tries to optimize an image for shrinking, so many of the subtle shades in the sky were simply tossed out by the compressor. Use TIFF or RAW, as these do not use lossy compression. The files are much larger, but they don't use lossy routines (LZW is lossless compression, and RAW doesn't compress at all).

To try to fix the problem in Photoshop (GIMP will have similar abilities, but I don't use it so I can't instruct you), you need a good mask of the sky. Zoom in to 100% on an area of the sky (so you have 1 pixel for each screen pixel). Use Filter | Blur | Gaussian Blur and interactively blur the sky to get a smooth effect; don't worry that it looks fake at this stage. Next use Filter | Noise | Add Noise, choose Gaussian from the radio buttons at the bottom of the dialog that comes up, move to a boundary of the mask so you can see the sky and the unblurred areas of the photo, and adjust the noise amount to approximate the noise in the original areas. This time it will look a little sharp; try to match the grain size. Now choose Filter | Blur | Gaussian Blur again, and this time select a very low number, .3 or .4 pixels, adjusting it to match back to the look of the original image. You will have to play around with this several times to get it right for your image - and believe me, every image is different.

AND NEVER USE JPEG. Save as TIFF, which is a general standard. JPEG is for web use; it really ruins images.

Re:JPEG compression (1)

scdeimos (632778) | more than 7 years ago | (#17786418)

Use TIFF or RAW, as these do not use lossy compression. The files are much larger, but they don't use lossy routines (LZW is lossless compression, and RAW doesn't compress at all.).

If you're at all serious about digital photography, particularly if you're leaning towards scientific applications like astrophotography, I'd recommend giving TIFF the flick. TIFF supports many different compression schemes, including LZW (lossless) and JPEG (lossy); a number of cameras I've seen supporting TIFF are actually using TIFF/JPEG because they can use the same CODEC for generating their JPG files. TIFF is also limited to 8 bits per channel. Stick with your camera's RAW format, or FITS if it has it (unlikely).

What you use to edit your pictures will directly affect their quality as well. Sure, GIMP is free and I like it, but it is also limited to 8 bits per channel and so is again useless for quasi-scientific stuff. Photoshop still reigns here, but there are other free applications coming out that support 16 bits per channel, like Krita (which actually supports up to 32 bits per channel in some color spaces).

Re:JPEG compression (0)

Anonymous Coward | more than 7 years ago | (#17787286)

TIFF images actually support 16 bits per channel. Higher-end scanners often save to 16-bits-per-channel TIFF files, which are editable in Photoshop and supported in some video compositing software as well. As for the original topic, if you are REALLY serious about image quality, forget the digital camera altogether. Get a medium or large format film camera and a good film scanner and learn to use those well. 645-format cameras can be had on eBay starting around US$300 for a decent outfit (body, lens, finder, film insert), and a decent consumer flatbed (I personally use a US$400 Epson 4990) will give you stunningly better results than any digital camera under US$10K. A 4x5 camera should run about the same used, and 4x5 transparencies are a thing of beauty. If you decide later to step up, professional digital backs are available for both medium and large-format bodies. Most of the work on my website is shot on medium format; I can print tack-sharp images poster size easily.

The best way to improve pictures (1)

spaceyhackerlady (462530) | more than 7 years ago | (#17785164)

The best way to improve your pictures is to learn something about composition and lighting. If the subject matter is good, you have a good picture.

You want better data? Get a better camera. Ditch that point-and-shoot for a DSLR, or even (gasp!) a film camera. My 50-year-old Crown Graphic takes pictures that the very best DSLRs can only dream about.


50 year old Crown Graphic (1)

falconwolf (725481) | more than 7 years ago | (#17786052)

You want better data? Get a better camera. Ditch that point-and-shoot for a DSLR, or even (gasp!) a film camera. My 50-year-old Crown Graphic takes pictures that the very best DSLRs can only dream about.

A view camera? What size is the film? I've been thinking of getting a 645 medium format with both film and digital backs. I think this size would be good for both large landscapes and photojournalism; I want to do both.


Re:50 year old Crown Graphic (1)

spaceyhackerlady (462530) | more than 7 years ago | (#17787862)

The old press cameras, the 1950s newspaper photographer cameras, are the ancestors of the modern field/technical cameras. They fold up into a portable little box and take sheet film. They can do lots of view camera things, but they're not really view cameras. They were made in different models to take different sizes of film. Mine, in particular, takes 4 by 5 inch film. Since an 8 by 10 print is only a 2x enlargement, you can just about get away with murder. :-)

Press cameras were intended to be used handheld, and I try to use mine handheld as much as possible. You can sit one on a tripod, open the back to access the focusing screen, and move the lens around like a view camera. The rear is fixed, so you lose a little flexibility. If you want a real view camera, this isn't it. If you want a seriously cool camera that takes great pictures, this is an attractive choice. It's helpful if you can do your own darkroom work.

I have a view camera, BTW, focusing hood and all. The camera design hasn't changed much since about 1885, though the film and optics have been developed and improved tremendously.

You mentioned medium format, and used medium format gear is cheap nowadays. While there are no digital backs for systems like Pentax 67 (yes, I have one of those, too), they also take breathtaking pictures. The larger negatives produce prints with a reach-in-and-touch-it quality that smaller negatives just don't have. I liken the Pentax 67 to photography's answer to a Humvee - it's big, heavy, black, ugly and noisy, but it gets the job done, every time.


photography (1)

falconwolf (725481) | more than 7 years ago | (#17791832)

It's helpful if you can do your own darkroom work.

I have worked in darkrooms developing film and prints, however it's been too long since I have. There's a photographer association in the area, IFP Minnesota, that I've been thinking of joining. It has classes and a darkroom I would be able to use after taking a darkroom orientation. Eventually I'd like to build and set up a darkroom in my basement.

You mentioned medium format, and used medium format gear is cheap nowadays

I've looked at some used medium format gear sold by a local chain of photography shoppes, National Camera Exchange, or NatCam. If I get one, I'll definitely need to take a class at IFP Minnesota to learn to use it; all I've ever used are 35mm SLRs. I'd also like to get a DSLR and take a class for it as well, however for now at least I'll stick to film.

I liken the Pentax 67 to photography's answer to a Humvee - it's big, heavy, black, ugly and noisy, but it gets the job done, every time.

It's funny but while I like large laptops, big screen real estate, I'd rather have smaller camera bodies. I'd like to take my camera with me when I go hiking, and be able to take hand held photos. Another thing I'd like to do is get a good telescope with a camera mount so I can photo the stars.


Re:The best way to improve pictures (1)

n3k5 (606163) | more than 7 years ago | (#17790004)

If the sensor of your digital camera matches the size of your film, and price is irrelevant, you can get one that at least matches the quality of the analog one. Pictures that the very best DSLRs can only dream about? No, definitely not. Gear that your wallet can only dream about? Yes. :-)

Take pictures on a cold day... (1)

tucara (812321) | more than 7 years ago | (#17785224)

I've done a lot of work with scientific-grade CCDs, and as other people are pointing out, there are unavoidable limits to the noise in your image. A $50,000 scientific-grade CCD can be cooled via solid-state (Peltier) cooling down to around -50 degC, or, using an LN2 dewar based unit, down to 77+ Kelvin. The rule of thumb is that dark current (thermal noise) reduces by a factor of 2 for every 5 degree (Kelvin) drop in the temperature of the CCD. The LN2-cooled cameras are how astro people get decent signal from very long (hours) exposures. So, take a picture of the sky on a very cold day in northern Canada.
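That halve-every-5-K rule of thumb, as code (just the stated heuristic, not a physical model):

```python
def dark_current_factor(delta_t_kelvin):
    """Relative dark current after cooling by delta_t kelvin (positive = colder),
    using the halve-every-5-K rule of thumb quoted above."""
    return 2.0 ** (-delta_t_kelvin / 5.0)

print(dark_current_factor(5))    # 0.5: one 5 K drop halves the dark current
print(dark_current_factor(70))   # ~20 C down to -50 C: 2**-14 of room-temp level
```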

Selective Blur (1)

melikamp (631205) | more than 7 years ago | (#17785292)

If all else fails, why not use Gimp (or Photoshop) to fix your pictures? One of the tools that I use almost too much is the selective Gaussian blur. You could select the sky with the magic wand and apply it as many times as needed. Or, if you don't have any clouds, why not just blur it?

Re:Selective Blur (1)

Shabadage (1037824) | more than 7 years ago | (#17785808)

Fixing a clear sky is actually pretty easy as long as it's not TOO messed up, but it sounds like you've got a decent camera so I'll assume it's alright. This is GIMP, BTW. Here's how I would do it: isolate those nasty splotches with the magic wand tool, leaving about a 10-pixel border around them. Copy them to a new layer. GBlur them a bit (I find anything larger than 3x3 pixels usually doesn't look right). Now select the transparent area with the select-by-color tool, then invert it (so that the splotches are the things actually selected). From there, use the fade outline tool under the "Script-Fu/Selection/Fade Outline" menu (in the image window, NOT the tool window). Set the fade length (I don't know exactly what it's called, I'm doing this from memory) to slightly smaller than your border around the "dirty" pixels. This will blend the patched blur and the original image together gradually, as opposed to simply overlaying the original as I've seen explained here. This method works great on smallish patches, but tends to look pretty ug-go when you do it on a larger scale. On a side note, sometimes simply blurring the entire sky and playing with the contrast levels a bit will net you some of the most beautiful sky/cloud scenes. Just don't blur too much.

and don't forget to clean your sensor occasionally (0)

Anonymous Coward | more than 7 years ago | (#17786034)

Lots of good advice here; also, don't forget that you need to clean your sensor occasionally.

You can use a blower, or if you KNOW WHAT YOU'RE DOING, you can use a "PEC PAD" (lint-free optical cleaning cloth) and a few drops of some ultra-pure solvent like methanol (Eclipse brand for instance). Just put a drop on the cloth, wipe once in a uniform stroke, and throw it away.

To check for sensor dust: take a picture of the clear blue sky, about 2 stops overexposed, with the smallest possible aperture, and as misfocused as you can get it. Then load it into PS and run the unsharp mask with a fairly large radius. You'll see all the dust. You can't eliminate it, but you can reduce all the big chunks by blowing on and/or wiping the sensor.

This doesn't give you the non-uniform pixels you're seeing, but you might be surprised at just how filthy your sensor is if you've owned your camera more than a few months.

(Assuming you have an SLR where you can get to the sensor, of course)
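The dust check described above (blurred, overexposed sky shot, then an aggressive large-radius unsharp mask) can be sketched in a few lines of NumPy. The radius, amount, and the crude box blur standing in for Photoshop's Gaussian blur are my own choices, not part of the parent's recipe:

```python
import numpy as np

def box_blur(img, r=8):
    """Crude separable-free box blur standing in for a large-radius Gaussian."""
    pad = np.pad(img.astype(np.float64), r, mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    k = 2 * r + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def dust_map(image, radius=8, amount=4.0):
    """Large-radius unsharp mask: dust shadows too faint to see at
    normal contrast get pushed hard toward black."""
    img = image.astype(np.float64)
    detail = img - box_blur(img, radius)          # local deviation from blur
    return np.clip(img + amount * detail, 0, 255).astype(np.uint8)
```

Run it on the blurred sky frame and the big dust chunks show up as dark blobs, just as the parent describes with PS's unsharp mask.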

This is done in astronomy already (0)

Anonymous Coward | more than 7 years ago | (#17786054)

Yes, something like that "should" work. I had some time on the scope on top of Van Allen Hall in Iowa City when I was in college, and before any space photos are taken, a reference photo or two are taken. This allows you to adjust for oversensitive and dead pixels (well, it can't "fix" a dead pixel, but you then know it's there). I think it also allowed adjusting for temperature variation across the CCD (but this won't help for your camera -- the temp would vary too much just moving the camera from one hand to the other 8-).

How does this transfer to a camera? I don't know. I think with the telescope CCD the references were essentially taken with the lens cap on, but I don't think a camera CCD will generate a useful reference photo in the dark (with my el-cheapo digital camera, if you amplify a photo taken in the dark you just end up with a bunch of noise lines from the camera electronics), and in the light it's not a reference any more.
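For what it's worth, the flat-field/dark-frame arithmetic used on telescope CCDs is simple enough to sketch in NumPy. This assumes you can actually capture a dark frame (lens cap on) and a flat frame (uniform light source), which, as noted above, may not hold for consumer gear:

```python
import numpy as np

def flat_field_correct(raw, dark, flat):
    """Astronomy-style calibration:
    corrected = (raw - dark) / normalized(flat - dark)."""
    raw = raw.astype(np.float64)
    dark = dark.astype(np.float64)
    gain = flat.astype(np.float64) - dark   # per-pixel sensitivity map
    gain /= gain.mean()                     # normalize average gain to 1.0
    return (raw - dark) / np.clip(gain, 1e-6, None)
```

The dark frame removes the fixed offset (hot pixels, amplifier bias); dividing by the flat evens out per-pixel sensitivity differences. Random shot noise, of course, survives this completely untouched.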

You don't want all your pixels to be identical (1)

OldBus (596183) | more than 7 years ago | (#17786150)

As many have already pointed out, you don't want all your pixels to be perfectly identical when you take a picture of the sky. A certain level of 'noise' ('grain' in film photography) is part of the picture. There are circumstances where too much noise can be annoying, and it is possible to have faulty or dusty CCDs, but it doesn't sound like that's your problem. Both the GIMP and Photoshop have cloning tools that let you copy existing areas; these carry the noise along from wherever you take the source, which is essential for making the new material blend in properly.

To see why pictures would look terrible without this natural variation, try taking a picture with a lot of 'uniform' sky and some land. Print one out as it comes and on the other, replace the sky with a 'perfect' blue where all the pixels are the same. Compare the two and decide which you prefer.
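The blending point above is easy to automate: measure the grain of the real sky and bake the same amount into the replacement fill. A NumPy sketch; the function and the flat-fill-plus-Gaussian-grain approach are my own illustration, not a built-in GIMP/Photoshop feature:

```python
import numpy as np

def fill_with_matched_grain(image, mask, color, rng=None):
    """Fill masked pixels with `color` plus Gaussian grain whose
    amplitude matches the untouched pixels."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = image.astype(np.float64).copy()
    sigma = out[~mask].std()                  # grain level of the real sky
    out[mask] = color + rng.normal(0.0, sigma, size=int(mask.sum()))
    return np.clip(out, 0, 255)
```

A fill made this way sits next to the original pixels without the telltale "plastic" look of a perfectly uniform patch.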

Books (1)

metalhed77 (250273) | more than 7 years ago | (#17786448)

I'm a bit baffled by your question. If I were you, I'd pick up Professional Photoshop by Dan Margulis. It's the best color correction book out there, and he'll answer your questions. As for what you said about the sky: cameras are different from the human eye, and with a wide-angle lens the sky SHOULD have a variation in color. You can fix this with a mask. Refer to Margulis to see when this is appropriate and how it should be done.

xnview is a nice free alternative to CS2 (2)

humil8d (721311) | more than 7 years ago | (#17786524)

Yes (1)

metalhed77 (250273) | more than 7 years ago | (#17786532)

There is software that does this already. What you want is Noise Ninja and DxO. Noise Ninja can build a custom noise profile for your camera, and DxO can correct standard errors with lenses. DxO might not be available for your consumer camera.

Noise Ninja will compensate for the parts of your sensor that are naturally, always, noisy. DxO will correct vignetting and distortion from lenses.

Canon has software like that.... (1)

FunkyELF (609131) | more than 7 years ago | (#17786770)

Canon has some software where you can calibrate your camera by taking a picture of a white screen. That's mainly for SLRs, where you can get dust on the sensor or somewhere between the lens and the sensor.

I think what you want is something that removes what is called noise. For that I would use Neat Image, Noise Ninja, or GREYCstoration.

This needs emphasis (1)

rmrfstar (854542) | more than 7 years ago | (#17787154)

If you want high-quality digital pictures, you must get a better digital camera. A point-and-shoot (such as an Olympus 7.1MP) will produce significantly lower image quality than a DSLR. You may be surprised, but a lower-MP DSLR will take better pictures than a higher-MP point-and-shoot.

Check out the reviews and especially the sample photos of the following:
Olympus E-330 (7.4MP), similar to yours
Nikon D50 DSLR (6MP)
Canon XTi DSLR (10MP)

Three possible issues (0)

Anonymous Coward | more than 7 years ago | (#17787410)

There are three possible issues I can think of.

One, you might be taking pictures with the ISO (amplification) set too high. At extreme levels, this increases noise to easily visible levels (ever seen a picture taken by a camera phone?), but at modest settings a high quality digital SLR sensor (APS-C size or greater, or Olympus's 4/3 size) will provide very high quality at reasonable sizes.

Two, you're up against a fundamental limitation of quantum mechanics. Camera sensors work by essentially counting photons; the number of photons that accumulate at a given pixel gives the brightness. Even though at a macroscopic scale this might be incredibly smooth on average, photon emission (and the photoelectric effect) is still a statistical process. The contribution from these effects should be minor on a truly uniform background, though, especially compared to thermal noise and process variation of the sensor itself.

Three, your camera will do its best to take exposures in such a way as to maximize contrast. Obviously, if you completely blow out the exposure, you'll get a perfectly even (255, 255, 255) across the whole picture. But that's also useless as a reference.

In any case, a 7 megapixel image was never meant to be viewed at more than about 8x10. 4x6 prints will be very high quality, with the noise disappearing below the visual threshold. If you're filling your average monitor, you're already zoomed in more than you should be. If you're zooming in at 100%, in many cases you're looking at an image many times larger than it's intended to print.

Noise (entropy) is an inescapable fact of life, and it can't be gotten rid of short of cooling your camera down to absolute zero. (Not recommended, even for astrophotography buffs. Leave those particular extremes to the professionals.) For everyday photography, there's absolutely no need to go to extensive lengths to try and eliminate noise, because noise is random by definition. If there's no bias, you can't predict it, and thus can't remove it. If there were a 100% surefire way of removing noise, it'd be put on a chip and stuffed into cameras with truly horrible noise response to turn them into perfect cameras. It's a physically intractable problem.
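The shot-noise point (issue two above) is easy to demonstrate: photon arrivals follow a Poisson distribution, so the signal-to-noise ratio of a "uniform" patch only grows like the square root of the photon count. A quick simulation; the photon counts are made-up illustration values:

```python
import numpy as np

def shot_noise_snr(mean_photons, n=100_000, seed=42):
    """SNR (mean / std) of a simulated uniform patch of Poisson
    photon counts. Theory says this approaches sqrt(mean_photons)."""
    rng = np.random.default_rng(seed)
    patch = rng.poisson(mean_photons, size=n)
    return patch.mean() / patch.std()
```

A patch averaging 100 photons per pixel gives an SNR near 10, while 10,000 photons gives one near 100, which is why shadows look noisier than bright sky and why a bigger, better-exposed sensor beats more megapixels.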

no (1)

n3k5 (606163) | more than 7 years ago | (#17789772)

No, it can't be done; the artifacts you want to eliminate aren't so consistent that you could prepare a canned antidote beforehand. There's special anti-noise software that works on RAW images and can make use of calibration shots. GIMP and Photoshop are best suited to laborious manual retouching, but of course you can integrate specialised anti-noise software into your Photoshop/GIMP workflow. (I'm not recommending a particular product because I don't have personal experience with such tools.) Either way, you're mostly enhancing the visual impression of clarity, not actually recovering information that was already lost.

Some cameras have a mode that takes a second exposure with a closed shutter, which is mostly used to find hot pixels and correct the previous exposure. There are some problems you can map out beforehand, generating data that's good practically forever: lens distortion can be corrected that way, as can sensor response curves and chromatic aberration. (Not that I'd recommend vanilla Photoshop or GIMP for that, but some of the specialised tools might be available as plug-ins.)

However, what you're talking about is basically noise, which can't be eliminated like that. All you can do is gather some statistical data about it and use that to guide an anti-noise filter; it'll have to come up with a unique solution for each individual exposure, though.
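The hot-pixel part really is the one piece you can map out beforehand from a closed-shutter dark frame. A NumPy sketch; the MAD-based threshold and the 3x3 median patch are my own choices:

```python
import numpy as np

def find_hot_pixels(dark_frame, nsigma=5.0):
    """Flag pixels far above the dark frame's typical level,
    using a robust (median/MAD) estimate of the spread."""
    d = dark_frame.astype(np.float64)
    med = np.median(d)
    mad = np.median(np.abs(d - med)) + 1e-9
    return d > med + nsigma * 1.4826 * mad

def patch_pixels(image, hot):
    """Replace flagged pixels with the median of their good 3x3 neighbours."""
    out = image.astype(np.float64).copy()
    h, w = out.shape
    for y, x in zip(*np.nonzero(hot)):
        y0, y1 = max(y - 1, 0), min(y + 2, h)
        x0, x1 = max(x - 1, 0), min(x + 2, w)
        win = out[y0:y1, x0:x1]
        out[y, x] = np.median(win[~hot[y0:y1, x0:x1]])
    return out
```

The hot-pixel map is (nearly) permanent for a given sensor and temperature, so it only needs to be rebuilt occasionally; the random noise in every other pixel, as said above, still needs per-exposure filtering.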

I wouldn't use GIMP for any serious photo editing (1)

MotorMachineMercenar (124135) | more than 7 years ago | (#17789928)

It lacks 16-bit support and color management. Most people won't need 16-bit support, but if you plan on printing your photos or need to make drastic adjustments, it's a must. And without color management, your photos will look very different on other people's monitors and printers.

And let's not forget the atrocious printing with GIMP, compounded with both matters above.

There's a reason why Photoshop is the most asked-for Linux application.

Cosmic Rays (0)

Anonymous Coward | more than 7 years ago | (#17789954)

The noise on even the highest quality CCDs will vary from image to image every time. There is no way to predict when and where a cosmic ray will hit your CCD. In spectral applications you can only identify them reliably by manual inspection and remove the spikes one by one manually, replacing them with surrounding background or signal if present. Software can be used to remove them, but it is about as effective and accurate as click and pop removal from audio tracks, which leaves a lot to be desired.
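A crude automated stand-in for that manual spike removal, for a 1-D spectrum: flag points that sit far outside a robust estimate of their neighbours and replace them with the local median. Window size and threshold here are arbitrary choices of mine, and as the parent says, the results still deserve manual inspection:

```python
import numpy as np

def despike(spectrum, window=5, nsigma=6.0):
    """Replace isolated spikes (e.g. cosmic-ray hits) with the local
    median, judged against a median/MAD estimate of the neighbours."""
    s = np.asarray(spectrum, dtype=np.float64)
    out = s.copy()
    half = window // 2
    for i in range(len(s)):
        lo, hi = max(i - half, 0), min(i + half + 1, len(s))
        neigh = np.delete(s[lo:hi], i - lo)   # exclude the point itself
        med = np.median(neigh)
        mad = np.median(np.abs(neigh - med)) + 1e-9
        if abs(s[i] - med) > nsigma * 1.4826 * mad:
            out[i] = med
    return out
```

Like click-and-pop removal on audio, it handles the obvious isolated hits but can mangle genuine narrow features, which is exactly why manual inspection remains the reliable method.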

Colour (0)

Anonymous Coward | more than 7 years ago | (#17791140)

Shooting a white surface might not be enough. The errors might affect the different colour pixels (red, green, or blue) differently.
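This is easy to check: measure the noise separately per channel in a crop of "uniform" sky. A NumPy sketch, assuming an H x W x 3 RGB array (the function is mine, for illustration):

```python
import numpy as np

def per_channel_noise(patch):
    """Standard deviation of each RGB channel in a supposedly
    uniform patch (H x W x 3 array)."""
    return patch.reshape(-1, 3).astype(np.float64).std(axis=0)
```

If the three numbers differ noticeably, any calibration would indeed have to be done per channel rather than from a single white shot.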