
Kodak Unveils Brighter CMOS Color Filters

Zonk posted more than 6 years ago | from the seeing-you-in-all-the-old-familiar-places dept.

Media 184

brownsteve writes "Eastman Kodak Co. has unveiled what it says are 'next-generation color filter patterns' designed to more than double the light sensitivity of CMOS or CCD image sensors used in camera phones or digital still cameras. The new color filter system is a departure from the widely used standard Bayer pattern — an arrangement of red, green and blue pixels — also created by Kodak. While building on the Bayer pattern, the new technology adds a 'fourth pixel, which has no pigment on top,' said Michael DeLuca, market segment manager responsible for image sensor solutions at Eastman Kodak. Such 'transparent' pixels — sensitive to all visible wavelengths — are designed to absorb light. DeLuca claimed the invention is 'the next milestone' in digital photography, likening its significance to ISO 400 color film introduced in the mid-1980's."

184 comments

Sacrifices color resolution: is it worth it? (4, Insightful)

chennes (263526) | more than 6 years ago | (#19519059)

Of course, you achieve this increased light sensitivity at the expense of losing 1/4 of your color resolution. If you want the increased sensitivity, it might make more sense to pick up something like the Canon 1D Mk III, which, at least according to Ken Rockwell, gives great results all the way up to ISO 6400. I'd hate to lose 1/4 of my color resolution *all of the time* to get the added sensitivity that I only need for a small fraction of the shots I take.

Re:Sacrifices color resolution: is it worth it? (4, Informative)

lurker412 (706164) | more than 6 years ago | (#19519155)

I'm not sure you would lose "color resolution" at all. The current RGB scheme combines color and luminosity. Under the new scheme, those could be separated, much the way LAB color space works. Potentially, this could give you a greater dynamic range, which would address the biggest weakness of current digital cameras. Of course, the proof will be in the execution. If it yields more noise in the process, then it won't be worth a damn. We'll see.

Depends on the application (1)

grahamsz (150076) | more than 6 years ago | (#19519187)

I already feel that my Digital Rebels have remarkably low-noise sensors and give me better results than shooting Velvia 50 and scanning. Still, I usually carry a tripod and virtually never shoot at high ISO, so it doesn't really affect me.

I expect this will have more value in cellphone cameras. Typically the noise floor goes up when the sensor shrinks, and increasing the brightness without increasing noise would be a massive boon for most cellphone photographers.

Probably not intended for SLRs (4, Insightful)

MonorailCat (1104823) | more than 6 years ago | (#19519193)

As you state, DSLRs already have fairly decent sensitivity, so this is not likely to be a good compromise for them.

Modern 'compact' digital cameras, however, which stuff 7-12 megapixels onto 1/1.8" and 1/2.5" sensors (smaller than your fingernail), could benefit enormously from this. These sensors are already past the diffraction limit of most of the lenses, so a drop in color resolution may not be too damaging (the eye being less sensitive to color resolution than to luminance anyway). Kodak is claiming a 1-2 stop increase in sensitivity, which would be a great benefit to anyone using a compact inside, or in other poor light. (I have yet to own a camera that performs well above ISO 200.)

As with all such tech announcements, the proof is in the pudding, and until we can compare full-size samples to conventional Bayer sensors, it's hard to tell whether this is the next big thing or not.

Re:Sacrifices color resolution: is it worth it? (1)

MrFlibbs (945469) | more than 6 years ago | (#19519225)

Exactly. The potential loss in color resolution is a pretty steep price for two stops worth of sensitivity. There may be a niche market for this with sports or astro photos, but most users shoot most of their shots with available lighting or fill flash and don't need the extra sensitivity.

This might make a nice second camera for the serious user, but most folks would be better off with the current technology.

Re:Sacrifices color resolution: is it worth it? (4, Informative)

Animaether (411575) | more than 6 years ago | (#19519383)

You don't really lose a quarter of your color resolution... you lose half the resolution in a specific wavelength, the one normally corresponding to green (though how this is mapped to RG or GB, rarely purely G, is up to the demosaicing algorithm). On the up side, you gain light sensitivity by a factor of more than two: assume the filters were perfect and light only existed in the wavelengths they let through. Then any single filtered cell only receives 33% of the stimulus. An unfiltered cell would get the full 100%.

This additional intensity resolution is, of course, only at a quarter of the resolution of a full Bayer array... but nobody ever said you had to discard the intensity measured by the red/green/blue filtered cells; in fact, you can't, or you couldn't determine color very well at all.

It's actually a pretty obvious setup (it has similarities to the RGBE storage format, which, though it has a much larger range, also mostly separates color (RGB) from intensity (the exponent)). Can't wait to see it patented. It makes me wonder why the Bayer pattern was the choice in the first place. I certainly know why they picked green as the go-to channel (human visual sensitivity, blabla), and why there have to be groups of 4 in the first place (cells are square/rectangular... design a triangular sensor cell, somebody, quick! Gimme that hexagonal sensor)... but why does Kodak only pop this up now?
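The 33%-vs-100% argument above can be sketched in a few lines of toy Python. The equal three-band split and the perfect-filter premise are the commenter's idealizations, not Kodak's measured filter curves, so the 3x figure is an upper bound:

```python
# Idealized model: incident light splits equally across three bands (R, G, B),
# and a colour filter passes exactly its own band. A clear ("panchromatic")
# cell passes all three.
BANDS = ("R", "G", "B")

def cell_response(filter_band=None):
    """Fraction of total incident light a sensor cell records."""
    if filter_band is None:          # clear (unfiltered) cell
        return 1.0
    return 1.0 / len(BANDS)          # ideal filter: one band out of three

filtered = cell_response("G")        # ~33% of the light
clear = cell_response()              # 100% of the light
print(f"a clear cell gathers {clear / filtered:.0f}x the light of a filtered one")
```

Under these assumptions the clear cell collects exactly three times what any single filtered cell does, which is where the "more than double" sensitivity claim gets its headroom.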

Re:Sacrifices color resolution: is it worth it? (1)

clodney (778910) | more than 6 years ago | (#19519857)

I had something of the same reaction to your remark about a patent. This is a fairly good example of how hard the "obviousness" test of a patent can be to judge. When you hear about this, it is something of a "Doh!" moment, and you think how obvious it is. (I was immediately reminded of the chroma subsampling options in JPEG compression, which use different sampling rates for color and luminosity).

But the fact is that hundreds of millions of digital cameras have been made in an intensely competitive R&D environment, and only now is this innovation coming up. That argues that it is indeed non-obvious, or someone would have come up with it sooner.

Yet if this becomes popular and patented, many people are going to think it was obvious all along.

Re:Sacrifices color resolution: is it worth it? (0)

Anonymous Coward | more than 6 years ago | (#19520201)

The question is: did it take large amounts of money to develop this idea? If the idea couldn't be patented, would the enormously competitive camera industry refuse to invest in the new, more effective kind of sensors? The answer to both questions is no, and thus the ability to patent something like this, which is obvious in hindsight, is not useful to society.

Re:Sacrifices color resolution: is it worth it? (1)

locofungus (179280) | more than 6 years ago | (#19520717)

Depends on when the idea originally came up.

I've certainly discussed this idea with people at least a year ago, completely independent of any research done at Kodak. Four-colour sensors, CYGM and RGBE, have been around for years.

Other ideas that have been played with are non-regular (fractal) CFAs.

An obvious further extension to what Kodak has done (assuming it isn't what they have done) is to have something like RYYB (Bayer but with G replaced with luminance). This ought to capture still more light. In fact, as CCDs tend to be more sensitive towards the red end of the spectrum, RYYG might be even better, or even yellow-magenta-luminance.

All of these add processing complexity (and might add too much noise).

Tim.

Re:Sacrifices color resolution: is it worth it? (1)

mikael (484) | more than 6 years ago | (#19520867)

According to this article [itwire.com.au], Kodak have added four new clear cells to the existing four-cell Bayer pattern. Somehow this resolves to a 4x4 repeat pattern.

Yes it is (1)

asphaltjesus (978804) | more than 6 years ago | (#19519833)

For most photography applications, it is a meaningful advance for which there is no downside.

The marketing hype surrounding resolution just keeps spinning further away from reality.

Digital photographic prints off the average production photo printer (my Costco has them right on the floor) have a lines-per-millimeter resolution _way_ below what even a **really** good digital SLR with **great** optics can capture.

Also keep in mind the color gamut of the average digital camera is quite narrow, and unsophisticated compared to analog. There are a number of segments of photography where film still rules the day because the results are more "cinematic" than digital.

So throwing out 3/4 of the color resolution still leaves you with extra data that will be thrown out when the data hits the paper. I can think of one or two exceptions, but they are way, way out of the norm.

A related anecdote: I recall the photos from the Mars rover were taken with a 1.5MP sensor, and they made *gigantic* beautiful images.

Re:Sacrifices color resolution: is it worth it? (1)

mrbluze (1034940) | more than 6 years ago | (#19519839)

I'd hate to lose 1/4 of my color resolution *all of the time* to get the added sensitivity that I only need for a small fraction of the shots I take.

To be honest, I wouldn't mind. If you buy a 10 megapixel camera that isn't a good quality SLR, you won't be getting much better quality than a 6 megapixel camera, since the bottleneck for quality becomes the lens.

All it would really mean is that we absorb a delay in the relentless rise in pixel density for a dramatic improvement in colour depth.

This technology will sell, there's no doubt about it.

Re:Sacrifices color resolution: is it worth it? (1)

art6217 (757847) | more than 6 years ago | (#19519853)

That's not so simple. You lose the resolution of green, but increase the resolution of red and blue. For example, if there is only blue light, then the CCD matrix has half the resolution both vertically and horizontally. With a white pixel, algorithms might guess that there is only blue, as the red and green sensors do not get any light, and then use the white sensor to increase the resolution of blue. It's a simple case, but smart heuristic algorithms might get a lot from the white pixel in various ways, including increased color resolution. Also, with new high-resolution CCDs, the problem of resolution itself often gets less important.

Loss of color resolution is not that big a deal (4, Informative)

Solandri (704621) | more than 6 years ago | (#19520169)

It's done on TV [nfggames.com] all the time [nfggames.com] and nobody complains (chrominance is separated from luminance and often transmitted at much lower resolution). As has been pointed out below, your eyes are made up of rods (which see black and white) and cones (which see color), and only a fraction of those cones are devoted to each individual red, green, or blue spectrum. So your color resolution is already significantly lower than your luminance resolution. You can even see photos demonstrating this [nfggames.com] with a 9x decrease in color resolution (3x in each linear direction). You're most sensitive to green, which is why the Bayer sensors commonly used in digital cameras divide each 4 pixels into GRGB.
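The parent's 9x figure is easy to sanity-check with a sample count: keep luminance (Y) at full resolution and store the two colour channels at 1/3 the linear resolution in each direction. The 6x6 image size below is made up for the demo; this is a sketch of the idea, not any particular codec's layout:

```python
# Full-resolution RGB vs. luma-at-full / chroma-at-1/3-linear-resolution.
w = h = 6
full_rgb = w * h * 3                 # three full-resolution channels
luma = w * h                         # Y kept at full resolution
chroma = (w // 3) * (h // 3) * 2     # Cb + Cr each at 1/3 linear resolution
subsampled = luma + chroma
print(full_rgb, subsampled)          # 108 samples vs 44 samples
print(f"colour samples reduced {(w * h * 2) / chroma:.0f}x")   # 9x
```

A 3x reduction in each linear direction cuts the colour data ninefold, yet, as the linked photos show, the result is hard to distinguish by eye.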

Re:Sacrifices color resolution: is it worth it? (2, Interesting)

Spy Hunter (317220) | more than 6 years ago | (#19520415)

Well considering that the human eye does much the same thing (rods vs cones), I'd say yes.

not hardly (1)

goombah99 (560566) | more than 6 years ago | (#19520739)

Actually, the place you will lose more bits is not the use of the area (25%) but the faster shutter speed: if the camera can shoot two stops faster, then you lose 75% of the light on the RGB detectors.

Now as for losing color resolution, I think you won't lose much. The only place you are going to notice it is in dim light, and it will be less than 1 bit of loss. Those would be shots you wouldn't have gotten anyhow, because they would have been below the camera's ability.

Prior art? LCD projectors do this same trick to brighten the image for presentations: RGB+white on the color wheel. This is also why some projectors designed for movie viewing are a little dimmer for the same wattage: they leave out the white on the wheel for better color saturation and higher wheel speed.


Re:Sacrifices color resolution: is it worth it? (1)

Ed Avis (5917) | more than 6 years ago | (#19521217)

Still, the human eye is more sensitive to changes in light intensity (luminance) than to changes in colour (chrominance), so it may be worthwhile trading off some colour resolution that you won't notice for some light sensitivity that you will. Remember that with existing colour digital cameras you need software to interpolate and guess colours for pixels because of the alternating RGB pattern on the sensor. The guessing job won't be that much more difficult if there are a few clear pixels in there as well.
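The interpolation step described above can be sketched with the simplest possible scheme, bilinear averaging of neighbouring same-colour samples. The 3x3 toy mosaic and its values are made up for illustration; real in-camera demosaicing algorithms are far more elaborate:

```python
def bilinear_green_at(mosaic, y, x):
    """Estimate the missing green value at a non-green site by
    averaging its four direct green neighbours (plain bilinear demosaicing)."""
    h, w = len(mosaic), len(mosaic[0])
    neighbours = [(y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)]
    vals = [mosaic[j][i] for j, i in neighbours if 0 <= j < h and 0 <= i < w]
    return sum(vals) / len(vals)

# Toy mosaic: the centre cell recorded red (200); in a Bayer layout its
# four direct neighbours are green sites (here all 50).
mosaic = [
    [10, 50, 10],
    [50, 200, 50],
    [10, 50, 10],
]
print(bilinear_green_at(mosaic, 1, 1))  # -> 50.0
```

A clear pixel slots into the same machinery: its full-spectrum reading constrains the luminance estimate while the colour guesses still come from the filtered neighbours.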

The proof is in the pudding (1)

2.7182 (819680) | more than 6 years ago | (#19519065)

It is hard to evaluate this from the press release. People have tried all sorts of variations, including ditching the whole pattern thing for true color (Carver Mead) and the results are about the same as other cameras.

Re:The proof is in the pudding (1)

cheesecake23 (1110663) | more than 6 years ago | (#19519769)

The proof may be in the pudding, but a quick look and sniff gives us some hints:

Compared to the standard Bayer sensor, 50% of the single-color pixels are replaced by clear pixels, which see the whole RGB spectrum and so are about 3 times more sensitive to light. So the whole array should be 0.5 + 0.5*3 = 2 times more sensitive to light, or one stop, in photography terms.

Kodak claims 2x-4x increased sensitivity (1-2 stops) but it's hard to see where this "extra" increase would come from.

The cost is reduced color resolution, but this is relatively unimportant since the human eye mainly sees detail in terms of luminosity, not color. (Incidentally, this is the main insight behind the efficient compression of JPEGs.)
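The parent's back-of-the-envelope estimate, written out. The 1/3 transmission for an ideal colour filter and the half-clear pattern are the commenter's simplifications, not measured figures:

```python
import math

color_fraction = 1.0 / 3.0             # light through an ideal R/G/B filter
clear_fraction = 1.0                   # light through a clear pixel
bayer = 1.0 * color_fraction           # Bayer: every pixel filtered
new = 0.5 * color_fraction + 0.5 * clear_fraction
gain = new / bayer                     # 2x
print(f"{gain:.1f}x sensitivity = {math.log2(gain):.0f} stop")
```

That gives exactly one stop, so any claimed 2-stop gain would indeed have to come from somewhere beyond the filter geometry alone (different microlenses, readout, or processing).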

too little, too late? (0)

Anonymous Coward | more than 6 years ago | (#19519079)

too little too late from Kodak?

I don't know how this will play with the market leaders for DSLRs (Canon/Nikon), or even for medium and large format backs (PhaseOne) etc.

I can't really see Canon using Kodak sensors.

Re:too little, too late? (1)

FunkyELF (609131) | more than 6 years ago | (#19519267)

I agree that it's too little, too late. To me, the Bayer filter should have been gone altogether since the Foveon X3 sensor [foveon.com] came out. If other camera makers would use this technology, the price would come down.

Re:too little, too late? (2, Informative)

Zarhan (415465) | more than 6 years ago | (#19519365)

Only problem is that Foveon (at least current implementation) is crap. The three colors have too much overlap and they also aren't very sensitive, either. Fine, you get rid of some of the bayer artifacts, but in return you lose most of the extreme colors and lots of sensitivity.

Cellphone cameras (1)

grahamsz (150076) | more than 6 years ago | (#19519763)

They need every bit of light they can get because the sensors are so small. Resolution and color depth aren't really a problem in that space, but brightness really is.

Re:Cellphone cameras (1)

Doctor Memory (6336) | more than 6 years ago | (#19520473)

Especially when you consider how many pictures are taken in bars and at parties and other low-light locations. If somebody's got a real camera, they typically have a decent flash and know when to use it. It's when you're trying to snag some cutie's pic to store along with her number in your phone that you have trouble.

This is horrible (-1, Offtopic)

Anonymous Coward | more than 6 years ago | (#19519085)

They will be using this transparent pixel as an alpha channel to put a stupid Kodak watermark on all your pictures.

Will (1)

Archades54 (925582) | more than 6 years ago | (#19519093)

Canon release the eos-30d equiv or eos-350d/400d equiv with this sensor within the next year? If so I'd wait to purchase :)

Re:Will (1)

SpinyNorman (33776) | more than 6 years ago | (#19520391)

Canon release the eos-30d equiv or eos-350d/400d equiv with this sensor within the next year? If so I'd wait to purchase :)

The article says that sensors based on this will start to become available early next year, but I'd guess it may be a little longer until camera manufacturers have tuned their on-camera image processing algorithms (and off-camera RAW algorithms) for the production sensors.

The larger-format sensor cameras like the EOS 30D/350D (both are APS-C) don't suffer so much in low light anyway, since they already receive more light per pixel because the pixels are larger. This is the main reason why DSLRs do so much better in low light (other than having the option of expensive wide-aperture lenses). It's really the point-and-shoots with their pathetically small image sensors that need this.

Re:Will (1)

Sparks23 (412116) | more than 6 years ago | (#19521501)

Even still, you can get a fair amount of noise if, say, you're doing concert photography or other no-flash-allowed low-light photography with a DSLR. My EOS 400D is a great camera, but it's a pain to shoot ISO 1600 with it unless I either have my f/1.8 wide-aperture lens on there or plan to spend a lot of time in post removing noise artifacts. My friend's EOS 5D, unsurprisingly, handles low-light shots far better than my 400D does, no doubt in part because of its own sensor being larger than that on the 400D.

So there's still room for improvement in DSLR sensors... though whether or not this is some magical miracle solution to low-light photography remains to be seen. :)

Re:Will (0)

Anonymous Coward | more than 6 years ago | (#19521287)

OMFG, this sounds like all the bitchy whine fest "photography" forums.

Equipment doesn't matter, period. A great photographer can take better pictures with a $4 disposable camera than a snap-shooter with a $10,000 camera ANY DAY.

Calling all patent trolls! (1, Insightful)

R2.0 (532027) | more than 6 years ago | (#19519125)

Kodak is going to patent this, use it themselves, and license it out to other companies (heard the story last night on NPR). For those who would abolish the patent system: why would this not be a "good" patent?

Please discuss.

Re:Calling all patent trolls! (1)

Looshi (1038712) | more than 6 years ago | (#19521253)

There's nothing wrong with this patent. This is a useful innovation and Kodak should be rewarded for their R&D efforts.

It is when companies begin to patent software that the problems arise. You can't patent a recipe, and how different is software code from a recipe? Both are lists of instructions. That is where the patent system fails. When eBay has to defend their "Buy-It-Now" feature, which is nothing more than HTML and some server scripts, then we have a problem.

Fifth? (1)

tom17 (659054) | more than 6 years ago | (#19519129)

The summary says the extra pixel is a 4th, but surely it is a 5th. 2*Green 1*Red 1*Blue and then the new one.

Re:Fifth? (1)

Archades54 (925582) | more than 6 years ago | (#19519173)

Far as I can tell it's arranged in 4s, with 1 green, 1 red, 1 blue and 1 transparent; the transparent one detects all light, and the light intensity etc. is based on it together with the others.

Re:Fifth? (0)

Anonymous Coward | more than 6 years ago | (#19519209)

Evidently not, because they're dropping the bayer pattern in favour of this one. The second green is now going to be replaced by a clear pixel, designed to capture more light.

DPReview has a good explanation (2, Informative)

MonorailCat (1104823) | more than 6 years ago | (#19519363)

They posted a full press release with images and sensor layout diagrams; additionally there is an excellent discussion in their news forum with a lot of good information. http://www.dpreview.com/news/0706/07061401kodakhighsens.asp [dpreview.com]

Re:DPReview has a good explanation (1)

tom17 (659054) | more than 6 years ago | (#19519645)

Yep, and according to the diagrams in there, you have 4 greens, 2 blues, 2 reds and 8 of the new ones in one 'pattern block'

i.e.
for every 2 greens you get 1 red, 1 blue and 4 of the new ones.
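Those counts can be checked against a candidate tile. The layout below is one arrangement consistent with the numbers quoted from the diagrams (panchromatic cells on a checkerboard, Bayer-like colours on the rest); the exact cell placement is illustrative, and the linked DPReview diagrams show Kodak's actual layout:

```python
from collections import Counter

# "P" = panchromatic (clear), G/R/B = colour-filtered cells.
tile = [
    ["P", "G", "P", "G"],
    ["B", "P", "R", "P"],
    ["P", "G", "P", "G"],
    ["B", "P", "R", "P"],
]
counts = Counter(cell for row in tile for cell in row)
print(dict(counts))   # matches the parent: P=8, G=4, B=2, R=2
```

Half of every 4x4 block is unfiltered, which is where the sensitivity gain comes from.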

Re:DPReview has a good explanation (1)

Reece400 (584378) | more than 6 years ago | (#19519889)

While the red & blue pixels seem a bit spread out to me in the new pattern, I'm sure this isn't much of an issue with such high-megapixel CCDs these days. From the examples, they seem quite well suited to cellphones.

Thanks! (0)

Anonymous Coward | more than 6 years ago | (#19520689)

WhyTF is the Kodak press release showing a disc of random colors instead of the sensor layout?

Re:Fifth? (1)

altoz (653655) | more than 6 years ago | (#19520217)

That wouldn't work unless you have some sort of hex pattern. Bayer is based on a square pattern, so most likely, it's something like 1xRed, 1xGreen, 1xBlue, 1xClear per 4.

we had 400 speed reversal film in the 50s (3, Interesting)

swschrad (312009) | more than 6 years ago | (#19519167)

and color in the 70s.

I refer you to Tri-X b/w, and to Fujichrome 400 around 1972, a really nicely balanced and warm film. If you pushed it to 1200, you could peel the grains off the base and go bowling with them, but the picture held up remarkably well on the small screen. It was THE go-to magic film for 16mm newsfilm when it came out.

If that had been a negative film, it would have been ASA 800 with little more grain than the "fast" 125 color film of the time.

Re:we had 400 speed reversal film in the 50s (0)

Anonymous Coward | more than 6 years ago | (#19519495)

A little later example, but one of the first commercially available slide films with a speed greater than 400 ASA.
I have a 20x24 print on my office wall that was Cibachrome-printed from a slide taken on 500 ASA slide film.
The picture is of Pink Floyd appearing at Brighton Dome (UK) on 20th Jan 1972: at the 'scream' in Careful With That Axe, Eugene, a magnesium flare went off behind the stage and illuminated the whole auditorium.
The camera was a Praktica LTL with a 50mm f/2.8 lens.
So, the marketing droids have got it wrong again...

IMHO, the relative density of the CCD device pixels will need to increase to compensate for the inclusion of this 4th (or 5th) sensor without the filter. I guess that the full frame CCD cameras might be the first to use this new filter arrangement.

Will Fuji counter this with a new design of their hexagonal-sensor CCD? I wonder.
The takeup of this will depend upon their licensing deals.
I'm no supporter of software patents, but when you spend lots of money on research like this, why shouldn't you reap some monetary rewards? At least it is not an 'obvious' patent like some that have been applied for (and granted) in the past two years.

(I use the term CCD as a generic term to cover CCD's and CMOS chips)

Re:we had 400 speed reversal film in the 50s (1)

leehwtsohg (618675) | more than 6 years ago | (#19519821)

Will Fuji counter this with an new design of their Hexagonal sensor CCD? I wonder.
Why is it called hexagonal? Every picture I see of the sensor looks octagonal. Hexagonal would indeed be better, and you wouldn't need 2 greens for 1 red and 1 blue. But the pictures seem to indicate that Fuji still has an extra green. Do they also have a real hexagonal design?

Re:we had 400 speed reversal film in the 50s (1)

Art Deco (529557) | more than 6 years ago | (#19521299)

I caught the bit about 400 speed color film in the mid-80's. I distinctly remember shooting Kodacolor 400 in '79. I looked it up and Kodacolor 400 came out in '77.

Not the point (1)

bill_mcgonigle (4333) | more than 6 years ago | (#19521457)

The point she was trying to make was that when it became available everywhere (pharmacies and five-and-dimes) at a similar price point to ASA 200, there was mass adoption, and most people's snapshots gained quality. Sure, they picked up some grain and lost some saturation, but most people care about non-blurry shots and better exposure.

Transparent AND absorbs light? (2, Funny)

Burb (620144) | more than 6 years ago | (#19519207)

That's a neat trick. I wonder how they can do that?

Re:Transparent AND absorbs light? (0)

Anonymous Coward | more than 6 years ago | (#19519333)

That's a neat trick. I wonder how they can do that?

"Transparent" doesn't mean "completely invisible". Take a window that's been in the sun for a while, for example. The glass is transparent, but it's also hot due to light (energy) absorption.

Re:Transparent AND absorbs light? (1)

vondo (303621) | more than 6 years ago | (#19519339)

The pixel is not transparent, the filter on top of it is. If a sensor has 4M pixels, the current design has 1M of them with little red filters on them, 1M with little blue filters, and 2M with green (our eyes are most sensitive to green). This new design, as I understand it, just replaces half of the green filters with "clear" filters. The sensor underneath is sensitive to whatever light makes it through.

Re:Transparent AND absorbs light? (1)

tbfee (1115043) | more than 6 years ago | (#19520115)

The filter's not transparent either (if it were, you wouldn't need a filter, right?). It's transparent at visible wavelengths, but the sensor still has some sensitivity in IR that needs to be filtered out.

Re:Transparent AND absorbs light? (1)

Goaway (82658) | more than 6 years ago | (#19521667)

By not being literal-minded nerds, and by being able to understand meaning from context, probably.

Nothing too revolutionary (3, Interesting)

bsundhei (1053360) | more than 6 years ago | (#19519257)

This is really not anything new to the image industry, just a new application. There is already the CMYK colorspace for printers, which is effectively an RGB + black to get deeper colors. I don't see this as really revolutionary, as much as "Can't believe this hasn't been done yet." Though, at least they admitted this too :) My biggest hope for this is to reduce per pixel noise by being able to reference the fourth plane, but I doubt they will get there for a while, they still have to work out the color conversions.

Re:Nothing too revolutionary (1)

ProdigySim (817093) | more than 6 years ago | (#19519543)

I agree. CMYK is really just three colors with an extra factor just to adjust what is essentially brightness. Sounds like exactly what this extra pixel is doing. So they've brought CMYK to cameras. Finally. Real life isn't VGA computer monitors. RGB isn't really that advanced.

CMOS version of Rods and cones (5, Insightful)

G4from128k (686170) | more than 6 years ago | (#19519299)

Kodak has rediscovered what evolution found millions of years ago -- design a dual system such as the rods and cones of the biological eye. The average human eye has about 120 million sensitive, panchromatic rods and only 6 or 7 million color-sensitive cones (many in the central fovea). The brain merges the limited amounts of color information with the larger volume of B/W image data to paint color into the image that we think we see.

Re:CMOS version of Rods and cones (1)

imadork (226897) | more than 6 years ago | (#19519455)

Kodak has rediscovered what evolution found millions of years ago....

And I'll bet they've already filed a patent on it....

Re:CMOS version of Rods and cones (2, Funny)

Anonymous Coward | more than 6 years ago | (#19519513)

Kodak has rediscovered what God found six thousand years ago

Fixed that for you. : )

Re:CMOS version of Rods and cones (3, Interesting)

SpinyNorman (33776) | more than 6 years ago | (#19519961)

The old/current Bayer pattern (also a Kodak "invention") also reflects the lower resolution of our vision for color vs. brightness (as do JPEG and YUV-based image compression: U and V can be downsampled compared to Y with little loss in perceived resolution). In the Bayer pattern, each block of 2x2 pixels has two with green filters, described as luminance-sensitive in the original patent, and one each with red and blue filters, described as chrominance-sensitive.

The new Kodak filter pattern still takes advantage of our better resolution for luminance, but implements it better by basing it on color filters (or the lack of them) that let more light through, thereby increasing signal-to-noise (especially needed in low-light conditions).

I'm not sure that this new filter pattern is optimal though. As another poster noted, R/G/B filters are too narrow and cut out a lot of light. You could still capture the color information with two broader filters more directly corresponding to the U & V of the YUV color space.

Sounds just like the new LCD display (1)

mbourgon (186257) | more than 6 years ago | (#19519347)

There was a story here a few days ago about them adding a "clear" pixel element to allow more light through. Sounds like the same premise.

why are sensors in RGB instead of CMY? (2, Interesting)

leehwtsohg (618675) | more than 6 years ago | (#19519351)

The gain here seems to come from the fact that they use a white sensor (i.e. unfiltered), which sees ~3 times more light.

They divide each sensor of the regular Bayer pattern into 4, half white, half color. This way one can also report a 4-fold increase in the number of pixels without really increasing the resolution (which will actually be a boon for digital photography, since no one needs the current resolution anyway, because the optics don't keep up, but a megapixel race is on...)

But does anyone know why sensors use RGB and not CMY? A cyan filter would let green and blue through but keep red out, instead of blocking two parts of the visible spectrum at each pixel. This way, by simply switching color space, the camera becomes twice as sensitive to light. I.e. instead of

R G
G B
use

M C
C Y
or something like that. One could even combine the two methods, and use white pixels, to gain a slight further increase in light sensitivity (from 8/12 to 10/12). Is there any reason that current cameras use RGB?
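The "twice as sensitive" claim above follows from the same idealized model used earlier in the thread: light splits equally into R, G, B bands and each filter passes its listed bands completely. Real filter curves overlap and leak, so treat this as an upper bound, not a measurement:

```python
# Bands each ideal filter passes; a CMY filter passes two of the three bands.
PASSES = {
    "R": {"R"}, "G": {"G"}, "B": {"B"},                  # RGB (Bayer) filters
    "C": {"G", "B"}, "M": {"R", "B"}, "Y": {"R", "G"},   # CMY filters
}

def fraction(filter_name):
    """Fraction of total incident light the filter lets through."""
    return len(PASSES[filter_name]) / 3.0

ratio = fraction("C") / fraction("R")    # 2/3 over 1/3
print(f"an ideal CMY cell gathers {ratio:.0f}x the light of an RGB cell")
```

As the replies below note, though, gathering twice the light per cell is not the whole story: converting CMY back to RGB mixes the channels, and the noise arithmetic of that conversion eats into the gain.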

Re:why are sensors in RGB instead of CMY? (5, Informative)

Anonymous Coward | more than 6 years ago | (#19519603)

CMYK filters were actually tried:

http://en.wikipedia.org/wiki/CYGM_filter [wikipedia.org]

They don't actually provide any practical benefit over RGB in terms of noise, if your final output is meant to be RGB, due to the mathematics of the color space transformation. And your final output is generally RGB, for digital photography; even if you print, the intermediate formats are generally RGB, and cheap consumer printers take input in RGB, not CMYK.

Re:why are sensors in RGB instead of CMY? (2, Interesting)

leehwtsohg (618675) | more than 6 years ago | (#19519757)

Thank you for the link! That is very interesting. So CMY was already tried in cameras. Once you have a digital pixel, it pretty much doesn't matter if you represent it in RGB or CMY - just a transform of the same information.
But I don't understand why you don't have less noise. The wikipedia article mentions higher dynamic range. Isn't it true that twice as much light falls on each sensor, so you gain a stop, and because of that have less noise (because you need the shutter open for only half the time)? Or is it somehow that when you get noise, it is in two channels, and thus you have the same amount of noise?

Re:why are sensors in RGB instead of CMY? (2, Interesting)

ChrisMaple (607946) | more than 6 years ago | (#19520235)

Usually random noise sums as "root sum of squares". So the signal level would double, while the noise would increase by about 1.4X; the net improvement would be 2/1.4 = 1.4. The more complicated electronics would reduce the S/N improvement a bit more, so the net improvement would probably be in the range of 1/3 to 1/2 stop (1.25X to 1.4X), I guess.
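A quick check of that arithmetic, assuming two equal, independent noise sources summing in quadrature:

```python
import math

signal_gain = 2.0                    # doubled light on the sensor
noise_gain = math.sqrt(2)            # root-sum-of-squares of two equal noise terms
snr_gain = signal_gain / noise_gain  # ~1.41

print(math.log2(snr_gain))           # 0.5 -> half a stop, before electronics losses
```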

Re:why are sensors in RGB instead of CMY? (2, Interesting)

ringm000 (878375) | more than 6 years ago | (#19520565)

In a camera, you cannot convert CMY to RGB by just inverting the components. Even in an ideal model like (C,M,Y) = (G+B, R+B, R+G) you have to convert like R = (M+Y-C)/2, increasing the noise level by 50%. The absorption spectra of the cones overlap a lot, so this ideal model is obviously unreachable, requiring complex color correction which would probably give imperfect results. However, these are all color-related problems; the dynamic range of luminance should still be improved.
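The inversion and its noise penalty can be sketched numerically. This assumes the ideal non-overlapping filter model above and independent Gaussian noise on each sensor reading; the exact penalty factor depends on what you hold fixed (read noise vs. shot noise on the brighter CMY sites), so treat the number as illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
R, G, B = 0.6, 0.8, 0.3                  # true scene values at one pixel

# Ideal complementary responses: each CMY filter passes two RGB bands
C, M, Y = G + B, R + B, R + G

# Linear inversion from the comment
R_hat = (M + Y - C) / 2
G_hat = (C + Y - M) / 2
B_hat = (C + M - Y) / 2
assert np.allclose([R_hat, G_hat, B_hat], [R, G, B])

# Noise propagation: unit-sigma noise on each CMY reading
n = 200_000
Cn, Mn, Yn = (x + rng.standard_normal(n) for x in (C, M, Y))
R_noisy = (Mn + Yn - Cn) / 2
print(np.std(R_noisy))                   # ~sqrt(3)/2 per unit of sensor noise
```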

Re:why are sensors in RGB instead of CMY? (1)

Zarhan (415465) | more than 6 years ago | (#19519695)

Canon G1 had a CMY pattern if I recall correctly. This also meant that it didn't suffer from the nice IR artifact (take a picture of hot charcoal and you actually get reddish image, lots of other cameras see it as purple...)

Re:why are sensors in RGB instead of CMY? (2, Informative)

slagheap (734182) | more than 6 years ago | (#19520033)

But does anyone know why sensors use RGB and not CMY? a Cyan filter would let green and blue through, but keep red out, instead of blocking two parts of the visible spectrum for each pixel. This way, by simply switching color space, the camera becomes twice as sensitive to light.

Let me just turn that around for you...

A green filter would let cyan and yellow through, but keep magenta out, instead of blocking two parts of the visible spectrum for each pixel.

The color spaces are complementary. Each color in one space is halfway between two colors in the complementary space.

___R___
_Y___M_
_G___B_
___C___

A filter of any color will, in one color space, allow one color and block the other two, while in the other color space it will allow two colors and block one.

RGB is the color space usually used for additive color (i.e. light -- More/different light means brighter). A sensor is capturing light. CMY(K) is usually used in subtractive color (i.e. ink -- More/different ink means darker).

Re:why are sensors in RGB instead of CMY? (1)

dazilla (647166) | more than 6 years ago | (#19521581)

Actually, a green filter would only allow the green components of the cyan and yellow light through. Light always works in the additive space. leehwtsohg is right. You also shouldn't theoretically lose any color resolution (I think the wiki's wrong on this one), since twice as much of each color is being sampled in each 2x2 block.

Re:why are sensors in RGB instead of CMY? (4, Interesting)

MasterC (70492) | more than 6 years ago | (#19520407)

This way, by simply switching color space, the camera becomes twice as sensitive to light. I.e. instead of ...
The issue is that the spectral density [wikipedia.org] of sunlight is not flat. (I can't seem to find a good image for you.) Basically, it peaks at about 500 nm (yellowish-green) and tapers off toward infrared and ultraviolet. The Bayer filter has twice as many green pixels as red or blue, which reflects the sunlight power spectral density more than having one cyan, one magenta, one yellow, and one intensity would. In other words, sunlight is more green than red and blue.

It is no coincidence (I suppose it's arguable if you call evolution a "theory" (with quotes)) that our eye is most sensitive to green light. :) Notice that of the three cone cells [wikipedia.org] in our eyes, two heavily favor (534 & 564 nm) the yellow-green end of the spectrum. IMHO, the ideal colors for a camera filter would match the three peaks in our cones which decently lines up with the sunlight PSD.

As a side note, the reason cameras need white balance is that the spectral densities of different light sources are not the same. Incandescents differ from fluorescents, which differ from sunlight; that's why incandescents have an orangeish tint and fluorescents a blueish tint (it's where their power spectra peak).

(The theory behind why chlorophyll is green (which means it reflects green and thus does not absorb the frequencies with the most power) is quite interesting to boot.)
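The same green bias shows up in how luminance is computed from RGB. The Rec. 709 luma weights, for instance, give green by far the largest share — one common rationalization for the Bayer pattern's two green sites per 2x2 block:

```python
# Rec. 709 luma weights: perceived brightness is dominated by green.
def luma(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(luma(0.0, 1.0, 0.0))  # 0.7152 -> green alone is ~72% of luma
print(luma(1.0, 0.0, 1.0))  # 0.2848 -> red and blue combined make up the rest
```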

Re:why are sensors in RGB instead of CMY? (1)

localman (111171) | more than 6 years ago | (#19520619)

Not exactly what you're saying, but Canon did something like this in the late 90's:

    http://en.wikipedia.org/wiki/CYGM_filter [wikipedia.org]

The result was, as you say, better light sensitivity, but at the expense of color accuracy. I guess in the end they decided the tradeoff wasn't worth it. I don't claim to understand any of the details, but I just read that page and then read your question :)

Re:why are sensors in RGB instead of CMY? (1)

shmlco (594907) | more than 6 years ago | (#19521553)

Since the final result wants to be RGB it's easier to start out that way. Second, you WANT light to be blocked by the peak filters in order to differentiate color.

A good sensor wants resolution AND sensitivity AND accuracy. Since you can't have all three at the same time, you make tradeoffs. Your solution might increase sensitivity, but at the cost of accuracy and resolution.

I'd Rather Have Less Noise, Wider dMax (2, Interesting)

ausoleil (322752) | more than 6 years ago | (#19519357)

Sure, "faster" sensors will be a boon to the consumer market, and will surely have some applications in the pro market as well -- existing light press photography come to mind.

For me, though, the problem is not so much speed as it is noise and dynamic range. That's because a lot of the time I still do fine-art level landscape and studio glamour photography -- neither of which are speed starved, but even the finest digitals could still use even less noise and wider dynamic ranges.

While DSLRs have a huge advantage over handhelds in this regard, it would still be nice to see improvements in s/n such that the darker zones maintained their clarity and detail. Even the finest Canon cameras suffer to a degree in this regard, at least for people with very high standards. Some of us have those standards because that is what our clients demand - and in some cases we still must use film to meet their criteria.

It's a virtual law that to obtain the best noise performance you need to use the lowest ISO speed that the camera can attain. So instead of bottoming out at 100, like most DSLRs, I'd like to see 25. Or better, 12.

For more info, visit http://www.normankoren.com/digital_tonality.html [normankoren.com]

Re:I'd Rather Have Less Noise, Wider dMax (0)

Anonymous Coward | more than 6 years ago | (#19519825)

There's a physical limit to how insensitive you can make a sensor, of course, which is what you're really asking for when you want lower ISO. At a certain point, you're just artificially crippling the technology to get a lower ISO, without any real benefit in terms of noise control.

Current digital sensors actually have pretty good dynamic range compared to film, but it could always be better. HDR is one alternative for the here and now, but future sensors are starting to move from 12-bit to 14-bit outputs, so that'll add some more dynamic range.

I think you really want to look at some of the SuperCCD cameras Fuji's put out, though. They achieve superior dynamic range by actually having sensor sites that aren't all the same size. This allows them to mix small sites for highlights with larger sites for shadows. While there's obviously a corresponding loss in resolution compared to a more traditional sensor, the pictures really do get quite a bit of extra dynamic range from the technique. It's also configurable on the camera, so you can change how the sensor is used depending on your needs.

That said, as much as you might benefit from more dynamic range, I think most people's needs are best served with faster sensors. For one thing, most people use digicams, and they're already pretty horrible at even moderately high ISOs, so any improvement is good (although if it means I can shoot acceptable ISO 6400 on a DSLR, that would be pretty impressive :-) ). And for landscape work, HDR techniques are a real possibility. Admittedly, not so great for studio work.
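The split-photosite idea can be modeled in a few lines. This is a toy sketch only — the full-well capacity and the 16:1 sensitivity ratio are made-up illustrative numbers, not Fuji's actual design:

```python
import numpy as np

# Toy model of split photosites: a large, sensitive site that clips early
# and a small, less sensitive site that holds highlight detail.
def combined(scene, full_well=1.0, ratio=16.0):
    large = np.clip(scene, 0.0, full_well)          # saturates in highlights
    small = np.clip(scene / ratio, 0.0, full_well)  # still unsaturated there
    # Use the large site until it clips, then fall back to the scaled small site
    return np.where(large < full_well, large, small * ratio)

scene = np.array([0.5, 1.0, 8.0])
print(combined(scene))  # [0.5, 1.0, 8.0] -> highlights preserved up to ratio*full_well
```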

Where is the transparent pixel? (3, Interesting)

140Mandak262Jamuna (970587) | more than 6 years ago | (#19519423)

The Bayer pattern has one red, one blue and two green sub-pixels per pixel. They could lose one green and replace it with transparent, or they could come up with a different packing to accommodate a transparent sub-pixel.

One of the problems with DLP projection TVs with a "color wheel" was that since every color segment lets only 1/3 of the light through, the picture was dim. So they added a fourth, "clear" element that lets all the light through, giving every projected pixel the blast of light it needs, while the remaining portions of the color wheel add only the color information for each channel.

This technology seems to be kind of similar. The transparent sub-pixel detects overall luminosity and the remaining pixels "adjust" for color. Very close to what we have in our retina, too. Almost all our rod cells respond only to luminosity, and the cones respond, to varying degrees, to three colors. A poster was complaining about losing "color resolution". I think millions of years of evolution have shown us the balance: you need about 90% of the pixels responding to luminosity and just 10% to color. The same ratio as in our retina.
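Consumer imaging already banks on that same asymmetry: JPEG and most video codecs store color at a fraction of the luminance resolution (4:2:0 chroma subsampling). A minimal sketch of the 2x2 chroma averaging involved:

```python
import numpy as np

def subsample_chroma_420(chroma):
    """Average each 2x2 block, then expand back: quarter-resolution color."""
    h, w = chroma.shape
    small = chroma.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

cb = np.arange(16, dtype=float).reshape(4, 4)
out = subsample_chroma_420(cb)
print(out[0, 0], out[0, 1])  # both 2.5: the top-left block (0, 1, 4, 5) averaged
```

Viewers rarely notice the missing color detail, which is the perceptual point the comment is making.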

"White" sensor? (1)

ek_adam (442283) | more than 6 years ago | (#19520767)

It'd be interesting to see the algorithm for that sensor. The color values of the adjacent pixels are going to have to be taken into account. A bare photoelectric sensor will provide a higher voltage for higher frequencies of light. To a "white" sensor, a blue photon looks brighter than a red one.

More complexity for RAW filters.

This Is Too Obvious (1)

osewa77 (603622) | more than 6 years ago | (#19519519)

This is so obvious - I've personally wondered why one-CCD sensors don't have a fourth pixel group to carry brightness information only. There must be good reasons why this hasn't been done before now; I hope we get to find out what they are.

Why not this pattern (2, Interesting)

Bob-taro (996889) | more than 6 years ago | (#19519533)

The patterns they suggested in the article were not as elegant as the Bayer filter (where each color formed an evenly spaced grid). They may be hiding the actual pattern for now or there may be some technical reason for those patterns that I don't understand, but I would suggest this pattern (C = Clear):

C G C G
B C R C
C G C G
R C B C

It keeps the same 4 clear : 2 green : 1 red : 1 blue ratio, but the different color pixels all form a regularly spaced grid.
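The claimed ratio is easy to verify mechanically:

```python
from collections import Counter

pattern = ["CGCG",
           "BCRC",
           "CGCG",
           "RCBC"]

counts = Counter("".join(pattern))
print(dict(counts))  # {'C': 8, 'G': 4, 'B': 2, 'R': 2} -> the 4:2:1:1 ratio
```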

Is the clear array sensitive across the spectrum? (1)

mlts (1038732) | more than 6 years ago | (#19519621)

This gets me wondering:

Does the clear array have a flat sensitivity level across the spectrum? Where it will give the same data value for the same number of photons striking it with a 700nm wavelength as it would for photons striking it that vibrate at 400nm?

If the sensor (for example here) was more sensitive to red, then this would skew the picture results significantly, especially if it picked up and added infrared light to the picture's data which isn't visible to the human eye.

Re:Is the clear array sensitive across the spectru (1)

slagheap (734182) | more than 6 years ago | (#19520189)


Does the clear array have a flat sensitivity level across the spectrum? Where it will give the same data value for the same number of photons striking it with a 700nm wavelength as it would for photons striking it that vibrate at 400nm?

Probably not... but the sensor will have some known characterization and the Bayer->RGB(->jpeg) conversion (that is done in-camera or on the computer if you handle RAW files) will account for this when it reconstructs the full RGB value for each pixel.


If the sensor (for example here) was more sensitive to red, then this would skew the picture results significantly, especially if it picked up and added infrared light to the picture's data which isn't visible to the human eye.

Most digital camera sensors have an infrared filter over them to prevent this. Canon sells a version of their 20D called the 20Da which is specialized for astrophotography; I believe the primary change compared to the standard 20D is the removal of the IR filter.

Re:Is the clear array sensitive across the spectru (1)

MajroMax (112652) | more than 6 years ago | (#19520207)

If the sensor (for example here) was more sensitive to red, then this would skew the picture results significantly, especially if it picked up and added infrared light to the picture's data which isn't visible to the human eye.

I imagine that's part of the reason it hasn't been done yet. Finding the "true luminosity" from a nearby Red, Green, Blue, and Clear CCD is probably nontrivial. I imagine that IR sensitivity isn't as troublesome as you'd suggest, though, since most cameras now come with IR filters over the CCD array. Photographers interested in IR (and UV) photography sometimes have to have that filter removed outright.

Re:Is the clear array sensitive across the spectru (1)

jeiler (1106393) | more than 6 years ago | (#19520327)

Does the clear array have a flat sensitivity level across the spectrum? Where it will give the same data value for the same number of photons striking it with a 700nm wavelength as it would for photons striking it that vibrate at 400nm?
Theoretically, there's no substance that has a perfectly flat sensitivity level across the spectrum. However, as long as the errors are below the perception level of the sensor, it should work. It's kind of like glass--there's no such thing as perfectly transparent, color-free glass, but as long as the glass is thin enough that human eyes can't perceive the little bit of light absorption that does occur, it still looks clear.

Re:Is the clear array sensitive across the spectru (1)

ConceptJunkie (24823) | more than 6 years ago | (#19520393)

I would imagine that the camera is built to take into account the sensitivity of the sensor across the spectrum when converting the RGB + luminance data to RGB for output. It would be similar to the calibration necessary to get the colors right in the first place. You would have to figure out how the sensor reacts to the R, G, and B wavelengths and apply a gamma transformation (or whatever -- I'm no photography or light expert) to what the sensors detect to get a result that represents what the human eye would see in the first place. Adding the luminance channel makes it more complicated, but it's still the same kind of problem.

Actually, it still amazes me how complicated color really is.

Re:Is the clear array sensitive across the spectru (1)

blincoln (592401) | more than 6 years ago | (#19520757)

If the sensor (for example here) was more sensitive to red, then this would skew the picture results significantly, especially if it picked up and added infrared light to the picture's data which isn't visible to the human eye.

I would flip that around and say that that behaviour might actually be advantageous. If you're in a low (visible) light situation, maybe you could use an IR flash to get luminance values and merge that with the dim visible colour data to get a halfway-decent colour image with no visible flash.

Double the sensitivity.... (0)

Anonymous Coward | more than 6 years ago | (#19519817)

So Kodak's best cameras would then have acceptable noise at, say, ISO 400?

Other ideas for alternative color patterns (4, Interesting)

Thagg (9904) | more than 6 years ago | (#19520053)

While I like Kodak's idea quite a bit, here are a couple of other ideas.

1) Sony was building cameras for a while with four color channels. There was the normal green, but also a different green they called "emerald" for one of the four Bayer pattern locations. Unfortunately, this was a solution in search of a problem, it never really caught on because there just wasn't any perceived benefit.

2) I do visual effects for films. For the last 50 years or so, people have been using bluescreen and greenscreen effects. The idea is to put a constant color background, and process the image so that any pixels of that color become transparent. Over the years, more and more lipstick has been applied to this pig -- so that you can now often extract shadows that fall on the greenscreen, pull transparent smoke from the greenscreen plate -- these things have become even more possible through digital processing.

Still, it sucks. Greenscreen photography forces so many compromises that I often recommend shooting without it and laboriously hand-rotoscoping the shots.

But -- say you had a fourth color filter, with a very narrow spectral band. Perhaps the yellow sodium color -- commercial lights that put out very narrow-band yellow are sometimes used for street lighting. If you had a very narrow-band sodium filter over 1/4 of the pixels, you could pull perfect mattes without 99% of the artifacts of traditional greenscreen and bluescreen photography. Finally (and this is killer!) you could make glasses that the director of photography and other lighting crew could wear that block just that frequency, so they could see the set as it really is -- without the sodium light pollution.

Still, kudos to Kodak for thinking outside the box.

Thad Beier
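The matte pull itself would just be a ramp on the narrow-band channel. A toy version — the channel values and the 0.1-0.9 ramp endpoints are illustrative assumptions, not production keyer settings:

```python
import numpy as np

# Toy matte pull from a hypothetical narrow-band "sodium" channel S:
# pixels lit by the backing read high in S, foreground reads low.
def pull_matte(S, lo=0.1, hi=0.9):
    alpha = 1.0 - np.clip((S - lo) / (hi - lo), 0.0, 1.0)
    return alpha  # 1 = solid foreground, 0 = backing

S = np.array([0.05, 0.5, 0.95])
print(pull_matte(S))  # alpha of 1, 0.5, 0 for the three samples
```

Because the key channel is spectrally isolated from the RGB image, edge artifacts and spill from a colored backing largely disappear — which is the core of the argument above.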

Re:Other ideas for alternative color patterns (1)

kybred (795293) | more than 6 years ago | (#19521271)

Greenscreen photography forces so many compromises that I often recommend shooting without it and laboriously hand-rotoscoping the shots.

I thought the color for the greenscreen was selected because it doesn't appear in human flesh tones, since it was originally (AFAIK) used for TV studio news so they could put slides (and later video) behind the talking heads. Occasionally you'll see a newsperson who selected a tie with a color too close to the greenscreen color, and you'll be able to 'see' through them, though.

Plus, it'd be hard to hand-rotoscope live video. :-)

Film is still better. (-1, Troll)

nurb432 (527695) | more than 6 years ago | (#19520257)

And always will be. There is no debate.

Digital sux.

Re:Film is still better. (1)

Jackie_Chan_Fan (730745) | more than 6 years ago | (#19520713)

Film really is nice, but changing film rolls is something I will never ever want to do again. I'm perfectly happy with a handful of 4GB compact flash cards and my Canon EOS-3D. Although I want a Mark II-1DS and Mark III ;)

Why does everything need 'bright colors'!? (1)

kkohlbacher (922932) | more than 6 years ago | (#19520283)

...and fancy lights?

My Hitachi CMOS works fine without any fancy color filters. Case mods are getting a little out of hand these days...



{/joke}

Better than Foveon? (2, Informative)

mdielmann (514750) | more than 6 years ago | (#19520491)

I wonder how this is going to compare to the Foveon [foveon.com] sensors. They capture RGB data at all pixels - filtered based on depth rather than location. Now if only those babies cost less.

Alternatives? (1)

BritneySP2 (870776) | more than 6 years ago | (#19521153)

What are the alternatives to using filters? I have been wondering how feasible an approach based on spectrum analysis would be, i.e. whether it would be possible to build a matrix of arrays of sensors, with each array having, say, a micro-prism on top of it?