
New Camera Sensor Filter Allows Twice As Much Light

Soulskill posted about a year ago | from the what-a-bright-idea dept.

Input Devices 170

bugnuts writes "Nearly all modern DSLRs use a Bayer filter to determine colors, which filters red, two greens, and a blue for each block of 4 pixels. As a result of the filtering, the pixels don't receive all the light and the pixel values must be multiplied by predetermined values (which also multiplies the noise) to normalize the differences. Panasonic developed a novel method of 'filtering' which splits the light so the photons are not absorbed, but redirected to the appropriate pixel. As a result, about twice the light reaches the sensor and almost no light is lost. Instead of RGGB, each block of 4 pixels receives Cyan, White + Red, White + Blue, and Yellow, and the RGB values can be interpolated."
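
The interpolation step the summary mentions is plain linear algebra under idealized assumptions. The sketch below assumes ideal responses (cyan = G+B, yellow = R+G, white = R+G+B); these are this sketch's assumptions, not Panasonic's published transfer functions:

```python
# Hypothetical sketch: recover R, G, B from the four measurements in one
# 2x2 block of the splitter layout, assuming idealized responses:
#   cyan        c  = G + B
#   white+red   wr = (R + G + B) + R = 2R + G + B
#   white+blue  wb = (R + G + B) + B = R + G + 2B
#   yellow      y  = R + G
def demosaic_block(c, wr, wb, y):
    r = (wr - c) / 2   # (2R+G+B) - (G+B) = 2R
    b = (wb - y) / 2   # (R+G+2B) - (R+G) = 2B
    g = y - r          # (R+G) - R = G
    return r, g, b

# With R=0.6, G=0.3, B=0.1 the idealized measurements are
# c=0.4, wr=1.6, wb=1.1, y=0.9, and the solve recovers the originals.
```

Note that each block's total signal is roughly twice that of an RGGB block, which is where the "twice as much light" claim comes from.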


Wow! Computational Electromagnetics rock! (5, Interesting)

140Mandak262Jamuna (970587) | about a year ago | (#43322283)

"We've developed a completely new analysis method, called Babinet-BPM. Compared with the usual FDTD method, the computation speed is 325 times higher, but it only consumes 1/16 of the memory. This is the result of a three-hour calculation by the FDTD method. We achieved the same result in just 36.9 seconds."

What I don't get is calling FDTD (finite difference time domain) analysis the "usual" method. It is the usual method in fluid mechanics, but in computational electromagnetics finite element methods have been in use for a long time, and they beat FDTD methods hollow. The basic problem with FDTD is that to get more accurate results you need finer grids, but finer grids also force you to use finer time steps. Thus, if you halve the grid spacing, the computational load goes up by a factor of 16. This is known as the tyranny of the CFL condition. The finite element method in the frequency domain does not have this limitation and scales as O(N^1.5) or so (FDTD scales as O(N^4)). It is still a beast to solve (rank-deficient matrices, large condition numbers, a full L-U decomposition needed), but FEM wins over FDTD because of the better scaling.
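
The factor-of-16 arithmetic can be checked numerically. This is a back-of-the-envelope model (assumed cubic 3-D grid, fixed domain size and total simulated time; the constants are illustrative), not a real solver:

```python
# Explicit 3-D FDTD work estimate under the CFL condition dt <= dx/(c*sqrt(3)).
# Halving dx gives 8x the cells and 2x the time steps: 16x the total work.
import math

def fdtd_work(dx, c=3.0e8, domain=1.0, t_total=1.0e-8):
    cells = (domain / dx) ** 3        # grid points in 3-D
    dt = dx / (c * math.sqrt(3))      # largest CFL-stable time step
    steps = t_total / dt              # time steps to cover t_total
    return cells * steps              # ~ total cell updates

ratio = fdtd_work(0.005) / fdtd_work(0.01)   # halve the spacing -> ~16x work
```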

The technique mentioned here seems to be a variant of the boundary integral method, usually used in open domains and in solution domains many wavelengths across. I wonder if FEM can crack this problem.

Re:Wow! Computational Electromagnetics rock! (5, Interesting)

Anonymous Coward | about a year ago | (#43322439)

I'm not sure any of the comparison of FDTD and FEM-FD in this post is right. FDTD suffers from the CFL limitation only in its explicit form; implicit methods allow time steps much greater than the CFL limit, at the cost of a matrix inversion at each time step, which the explicit version does not need. Comparing FEM-FD and FDTD methods is silly anyway: one is time domain, one is frequency domain, and they are solving different problems. There is no problem doing FEM-TD (time domain), in which case the scaling is worse for FEM compared to explicit FDTD: the FDTD method pushes a vector, not a matrix, and requires only nearest-neighbor communication, whereas FEM requires a sparse-matrix solve, the bane of computer scientists, since the strong-scaling curve rolls over as N increases. FDTD does not have this problem, requires less memory, and is friendlier toward the GPU-based compute hardware that is starting to dominate today's supercomputers.
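
The nearest-neighbor character of the explicit update is easy to see in a toy 1-D FDTD loop (normalized units, made-up soft source; a textbook sketch, not any production solver):

```python
# Minimal 1-D explicit FDTD (Yee-style) update: each field value is updated
# from its immediate neighbors only -- no matrix solve anywhere.
import math

def fdtd_1d(n=200, steps=100, courant=0.5):
    ez = [0.0] * n          # electric field
    hy = [0.0] * n          # magnetic field
    for t in range(steps):
        for i in range(n - 1):          # H update: nearest neighbor only
            hy[i] += courant * (ez[i + 1] - ez[i])
        for i in range(1, n):           # E update: nearest neighbor only
            ez[i] += courant * (hy[i] - hy[i - 1])
        # Soft Gaussian source injected at the center of the grid
        ez[n // 2] += math.exp(-((t - 30) ** 2) / 100.0)
    return ez
```

With a Courant number below the 1-D limit of 1, the field stays bounded; push it above 1 and the explicit scheme blows up, which is exactly the CFL restriction being discussed.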

The real question is... (5, Funny)

XiaoMing (1574363) | about a year ago | (#43322581)

Interesting comments from both, but I believe you both missed the point. The real question is, which one of these methods, FDTD or FEM-FD, will allow optimal reprocessing in the frequency domain that makes my dinner look prettier with an Instagram vintage filter?

Re:Wow! Computational Electromagnetics rock! (-1, Flamebait)

Anonymous Coward | about a year ago | (#43322803)

Replying anonymous so I don't lose my up-mod on your post.

You can also solve semi-implicitly, so you don't have to invert a full matrix on each time step. (Nearly the same as an explicit discretization plus overwriting your solution vector in place; you get the stability boost of an implicit method with a very easy implementation.)

Re:Wow! Computational Electromagnetics rock! (2)

loufoque (1400831) | about a year ago | (#43322999)

A GPU is actually pretty good at sparse matrix computations, unlike CPUs.

Re:Wow! Computational Electromagnetics rock! (1)

TheTurtlesMoves (1442727) | about a year ago | (#43324257)

Say what? If you do it right, CPUs are very good at sparse matrix methods. It's basic algorithms: lots of zeros and structure give lots of scope for optimization regardless of the target hardware. GPUs may be good too, and if you vectorize properly and avoid branching they can give better performance per dollar. Yes, this has been some of my day job.
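
For reference, the kind of structure being exploited: a compressed-sparse-row (CSR) matrix-vector product touches only the stored nonzeros. A pure-Python sketch; real codes would use a tuned library such as scipy.sparse:

```python
# CSR mat-vec: data holds the nonzeros row by row, indices their column
# numbers, and indptr the start of each row within data.
def csr_matvec(data, indices, indptr, x):
    y = []
    for row in range(len(indptr) - 1):
        s = 0.0
        for k in range(indptr[row], indptr[row + 1]):
            s += data[k] * x[indices[k]]   # only stored nonzeros are touched
        y.append(s)
    return y

# 3x3 example:  [[2, 0, 1],
#                [0, 3, 0],
#                [4, 0, 5]]
data    = [2.0, 1.0, 3.0, 4.0, 5.0]
indices = [0,   2,   1,   0,   2  ]
indptr  = [0, 2, 3, 5]
# csr_matvec(data, indices, indptr, [1.0, 1.0, 1.0]) -> [3.0, 3.0, 9.0]
```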

Re:Wow! Computational Electromagnetics rock! (1)

Anonymous Coward | about a year ago | (#43323567)

It is the usual method in fluid mechanics. But in computational electromagnetics finite element methods have been in use for a long time

Um, they're equations. It's math. The applicable techniques have been used about equally long in all fields. All you're saying here is "ooh, look, my field is better than theirs", and neither is really relevant to the article. "Go Red Sox!" would've been more useful.

(CFD specialist here who's been writing boundary element -- finite volume coupled models for over a decade.)

old news (0)

Anonymous Coward | about a year ago | (#43322287)

Seriously this was in the rags...months ago...not news now.

Re:old news (0)

Anonymous Coward | about a year ago | (#43322497)

You must be new here.
Or you signed up months ago.
But, in any case. Yeah.

cool story bro (0)

Anonymous Coward | about a year ago | (#43323465)

Slashdot, can I buy this RIGHT NOW? No? Then wake me up when I can.

I call bullpucky (-1)

Anonymous Coward | about a year ago | (#43322301)

there is no cyan in the color spectrum... and if you split the light, then there is no "white light". Foveon so far is the only company that has designed a sensor that actually "sees" RGB. The problem is that it is horrid if there is not an abundance of light to start with.

Re:I call bullpucky (4, Informative)

bugnuts (94678) | about a year ago | (#43322341)

Foveon has 3 photodiodes per pixel, and theoretically should have the most accurate colors and sharpness by avoiding the moiré and interpolation issues of Bayer filters. In practice, though, a lot of light is lost by the time it reaches the 3rd photodiode.

There is indeed white light because not every pixel has a filter over it. Many pixels pass the light through a hole to the pixel, while a neighbor pixel funnels red light (e.g.) to it. Thus, you get white + 1/2 the neighbor's red. You also get half the neighbor's red on the other side, resulting in white + red for the three pixels in a line.

Cyan is part of the color spectrum as a "subtractive color". What remains under each neighbor pixel when you strip away the red is the cyan.

From what I can tell, this will not get rid of the need for the anti-aliasing.

Re:I call bullpucky (3, Insightful)

ceoyoyo (59147) | about a year ago | (#43322411)

"From what I can tell, this will not get rid of the need for the anti-aliasing."

You ALWAYS need antialiasing when you discretize.
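
A quick numeric illustration of why discretizing without a low-pass filter aliases: a 7 Hz sine sampled at 10 Hz produces, sample for sample, the negative of a 3 Hz sine. The 7 Hz tone has folded down to 3 Hz (with inverted phase), and no amount of post-processing can tell the two apart. Generic DSP arithmetic, not specific to camera sensors:

```python
import math

fs = 10.0                          # sampling rate (Hz); Nyquist is 5 Hz
ts = [n / fs for n in range(20)]   # 2 seconds of sample instants
sig7 = [math.sin(2 * math.pi * 7 * t) for t in ts]  # 7 Hz, above Nyquist
sig3 = [math.sin(2 * math.pi * 3 * t) for t in ts]  # 3 Hz, below Nyquist
# sig7[n] == -sig3[n] for every n: the sampled 7 Hz tone is
# indistinguishable from a phase-flipped 3 Hz tone.
```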

Re:I call bullpucky (5, Funny)

Jafafa Hots (580169) | about a year ago | (#43322521)

"You ALWAYS need antialiasing when you discretize."

That's my motto!

Re:I call bullpucky (0)

Anonymous Coward | about a year ago | (#43322751)

quantize

Re:I call bullpucky (2)

ThePeices (635180) | about a year ago | (#43322863)

"From what I can tell, this will not get rid of the need for the anti-aliasing."

You ALWAYS need antialiasing when you discretize.

I think the word you are looking for is "quantize"

Re:I call bullpucky (2)

EdZ (755139) | about a year ago | (#43322967)

Discretising is just quantising in the spacial domain!

Re:I call bullpucky (1)

dr.g (158917) | about a year ago | (#43324033)

That's "SPATIAL domain"!

Uhh...you insensitive clod?

Re:I call bullpucky (1)

dfghjk (711126) | about a year ago | (#43323663)

Not when you can handle all frequencies that you will encounter. There are cameras on the market without anti-aliasing filters. When you stop down enough your aperture limits resolution to potentially less than the aliasing limit anyway.

Re:I call bullpucky (1)

ceoyoyo (59147) | about a year ago | (#43323785)

If your signal is already low-pass filtered you don't need to low-pass filter it. Sure, I'll give you that. You've still got antialiasing, you're just not doing it with a piece of glass.

Re:I call bullpucky (1)

dfghjk (711126) | about a year ago | (#43323651)

"From what I can tell, this will not get rid of the need for the anti-aliasing."

Which was not the goal, nor is it a goal of a Foveon sensor. Aliasing exists whenever there is frequency content greater than a sensor can handle.

"Foveon has 3 photodiodes per pixel, and theoretically should have the most accurate colors and sharpness by avoiding moire and interpolation issues with bayer filters."

Foveon does not promise more accurate colors. Sharpness is a function of a number of things, not just photosite layout. Foveon is a loser in the market because it doesn't perform.

Re:I call bullpucky (1)

swalve (1980968) | about a year ago | (#43322435)

All the colors are in the color spectrum. Cyan is between green and blue. About 500 nm wavelength.

Re:I call bullpucky (1)

Anonymous Coward | about a year ago | (#43322597)

Neither brown, black, nor white are in the color spectrum.

http://en.wikipedia.org/wiki/Spectral_color

Re:I call bullpucky (1)

Ford Prefect (8777) | about a year ago | (#43322987)

All the colors are in the color spectrum.

Magenta?

Re:I call bullpucky (3, Informative)

thegarbz (1787294) | about a year ago | (#43323139)

Magenta is a combination of colours; just like white, it isn't "in the colour spectrum".

Violet, however, is in the spectrum, but since it's outside the range of colours that can be created with red, green, and blue, we approximate it using magenta, a mixture of blue and red.

Re:I call bullpucky (1)

Khyber (864651) | about a year ago | (#43322659)

"there is no cyan in the color spectrum"

You might want to open your eyes and look in the 490–520nm range on a representation of the visual range of the EM spectrum.

Re:I call bullpucky (3, Funny)

Opportunist (166417) | about a year ago | (#43322755)

Is that one of those colors only women can see? Like mauve?

Re:I call bullpucky (1)

Khyber (864651) | about a year ago | (#43323009)

Actually you'd see those too and more if you took out the part of your eyeball that filters out UV and a few other wavelengths.

Re:I call bullpucky (1)

stenvar (2789879) | about a year ago | (#43324329)

There is really also no "R" in the color spectrum; anything a digital camera captures is going to involve measuring the response of some wide band color filter. Terms like "R", "cyan", and "white" describe roughly what kind of filter we are talking about, enough so that people get an idea of how this and other cameras work.

As for Foveon, it measures "RGB" directly at each pixel, but that's a bad tradeoff: it gives you lower resolution than interpolation, loses a lot of light, and actually doesn't give you much control over the spectral response. And what I really find annoying about "Foveon" is that the name suggests that it has something to do with the "fovea", when in reality a Bayer sensor actually works much more like the human eye.

yeay four sensors (0)

Anonymous Coward | about a year ago | (#43322309)

I've been hoping for 4-sensor cameras for ages. People only have three color sensors, but what those colors are vary a bit from person to person, and capturing 4 colors stands a better chance of getting images that look good for everyone.

Re:yeay four sensors (1)

ceoyoyo (59147) | about a year ago | (#43322419)

Too bad you're displaying them on a screen or printing them with a process that only uses three colours....

Additionally, it's not really a four-different-colour sensor. It's just got a different division of the usual red green and blue, and the result is processed into regular RGB pixels.

Re:yeay four sensors (1)

AK Marc (707885) | about a year ago | (#43322515)

Most photo printers will do 4-color, and there are 8-color printers if you wish (this one is under the "consumer" heading but titled "pro"; $500 doesn't sound too pro, and it seems more like the fake-pro type meant to suck in the prosumer crowd): http://www.usa.canon.com/cusa/consumer/products/printers_multifunction/professional_photo_inkjet_printers/pixma_pro9000_mark_ii [canon.com]

Re:yeay four sensors (3, Informative)

ceoyoyo (59147) | about a year ago | (#43322591)

So when you print to your eight colour inkjet, what file format is your image stored in that has eight colour channels? What software are you using that supports it?

Note that in CMYK, which is by far the most popular "four colour" system (and the one all those "four colour" printers use), black is one of the colours. That makes up for a shortcoming of the colour inks (not shared by camera sensors or displays): you can't make a decent black by mixing the colours. I suspect the eight colour printer is doing something very similar: mixing colours to give you a better (they say, anyway) representation of the three colour additive system that your computer, camera and monitor use.
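
The role K plays shows up in the textbook RGB-to-CMYK conversion. This is the naive formula (values in 0..1), not what any calibrated printer driver actually ships:

```python
# Naive RGB -> CMYK: the K channel carries the "blackness", so the C, M, Y
# inks never have to mix to black themselves.
def rgb_to_cmyk(r, g, b):
    k = 1.0 - max(r, g, b)
    if k == 1.0:                # pure black: no colour ink, K does it all
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

# rgb_to_cmyk(0, 0, 0) -> (0, 0, 0, 1): black comes entirely from K.
```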

Besides, the vast, vast majority of people don't colour calibrate their monitors OR printers. Unless you do that regularly all the extra colour channels in the world aren't going to help you.

Screen Printing (2)

Tenebrousedge (1226584) | about a year ago | (#43322951)

When working with designs meant for screen printing, the original artwork was done in RGB, then a team would separate the color channels (in Photoshop), one channel per ink to be used. They could technically do CMYK directly, but it didn't look good for a wide variety of purposes -- you can imagine a flat-filled cartoon character would be pretty much impossible. It would look a bit like comic book halftoning, probably. The shop would use that when they wanted to print Thomas Kincaide-esque sweatshirts for grannies. They would also use additional channels for things that weren't colors, like adhesive (for foil, usually) and clear inks.

I don't imagine that having more than three or four color channels is a new thing, or difficult to deal with. I would imagine even the prosumer technology would allow you to choose between various rendering intents. Probably the color separation is handled at the driver or device level, but TIFF, PDF, and DCS 2.0 (??) should handle extra channels natively.

A few more details on screen printing for those who might care: The actual screen printing process was not computer-controlled as a rule. The smaller shops I worked at printed a transparency which was transferred onto the screen by a photographic process, but the large one had a computer-controlled airjet "printer" that would knock out the design. Usually they would do a few samples by hand, to work out what ink and screen combination to use (different mesh sizes and ink thicknesses produce slightly different effects), and adjust finer details like when you would "flash" the shirt. That is, hitting it with a very high powered xenon lamp for a few seconds to dry the ink, before applying a new layer. You could do some interesting painterly effects with wet-on-wet ink; you can also make a hell of a mess that way. Flashing also tends to affect the color somewhat, especially for temperature-sensitive inks. After you get a few good samples, you send them off to the client as a proof. Then you would set up your automatic press for a run of a couple hundred. Color balance was something that the press operator kept an eye on after that point. After printing, the shirts are sent through a 400 degree open oven on a conveyor belt, for perhaps 10-20 seconds, to cure the ink.

Very fun job, the ink is messy as hell. I would still be doing it, but working with computers pays better.

Re:yeay four sensors (2)

AK Marc (707885) | about a year ago | (#43322953)

I gave an example of a printer with 7 colors+black, and yet you quibble over printer capabilities. Since reality is unrelated to your complaint, I'm curious what it is you are really complaining about. "There's no printer with 4 colours." "Here's one with 7+black" then you go off on some tangent about file support. I don't care. You are wrong. Everything you've said is wrong. I posted a link that proves you 100% wrong, and yet you keep insisting that reality is wrong and you are right. Yeah, I don't know where to go. If I answered your question about software, I'm sure you'd make up some other crap about paper quality or something.

Re:yeay four sensors (4, Informative)

ceoyoyo (59147) | about a year ago | (#43324111)

I'm not complaining about anything. I'm replying to your erroneous assertion (you DID read the whole thread before replying, right?) that the existence of printers with eight inks somehow means they'll be able to reproduce data from a hypothetical four colour channel camera sensor.

I do like your fake quotes though. Please indicate where I said "there's no printer with 4 colours." What I DID say was "Too bad you're displaying them on a screen or printing them with a process that only uses three colours." If you bothered to understand what you're talking about, or even read my comments, you'd realize that the process is indeed three colour. Even if you imagine a four colour camera sensor, the file you store the data in is three colour channel, the software you use to edit it is three colour channel, the screen you show it on is three channel and the data you send to the printer driver is three channel. IF you could somehow send the four channel data to the printer you might be able to reproduce some extra colours (which the vast majority of humanity probably wouldn't be able to see anyway), but probably not very well since all those extra inks are formulated specifically to help reproduce RGB.

Re:yeay four sensors (1)

EdZ (755139) | about a year ago | (#43322985)

what file format is your image stored in that has eight colour channels

Hoo boy, it's colour theory time! Do you want to store your colours as an intensity and a base colour (located in a 3D colour space, e.g. L*,a*,b*), or a linear combination of emissive wavelengths, or an explicit spectrum, etc.? Storing the exact colour something emits takes a lot of data. Three primaries correctly chosen are sufficient to record every colour the human eye can perceive, however, which is good enough for every application that involves image reproduction for people to look at (hello computational imagers and hyperspectral imagers and astrophotographers, don't kill me!). Unfortunately, we have no way to print those specific primaries, so we have to throw a load more at the paper to build up an approximation of them.

Re:yeay four sensors (1)

ceoyoyo (59147) | about a year ago | (#43323993)

You know, you really should at least glance at the thread before you reply. The OP was enthusiastic about how a four colour channel camera (which this isn't) would improve visual reproduction because it would let you reproduce some intermediate wavelengths that would help better match differences in the frequency sensitivity of different people's eyes.

In the first place, I doubt very much that there are big differences in the frequency sensitivity of people's eyes, except for tetrachromats and colour blind people, since it's determined by chemistry. But relevant to this thread, a four channel sensor wouldn't do you the slightest bit of good unless your screen and/or printer were also capable of reproducing those colours. Then someone replied that four-plus colour channel printers are already common. As you point out, those help match the RGB of your other equipment better. But having an eight ink printer isn't going to do you the slightest bit of good trying to match a four channel camera when everything in between is three channel.

Re:yeay four sensors (1)

Anonymous Coward | about a year ago | (#43323095)

> Note that in CMYK, which is the most by far the most popular "four colour" system (and is the one all those "four colour" printers use), black is one of the colours. That makes up for a shortcoming in the colour inks (which is not shared by camera sensors or displays) in which you can't make a decent black by mixing the colours.

Actually it makes up for more serious shortcomings. CMY gives an ugly brown, but the printer could handle this perfectly well on its own, adding K when necessary. However, for a long time CMY were dye inks while K was a pigment ink, and the two cannot be mixed. This means that even with CMYK you could not get true black in a photograph, because the whole photograph would be printed in CMY only. The title could be printed in K.

With 8 inks, this problem does not apply. Two are lighter versions of CM anyway, and two are different colours (orange and green?). The printer is perfectly capable of calculating the best mix from RGB values.

Re:yeay four sensors (3, Informative)

thegarbz (1787294) | about a year ago | (#43323187)

So when you print to your eight colour inkjet, what file format is your image stored in that has eight colour channels?

You don't seem to understand the purpose of the colours or how colour is managed in a workflow. A file stored on your computer will have a certain gamut; if not specified, this gamut is sRGB. Your printer also has a certain gamut, a function of the ink, the colours it can print, and the paper printed on. Colour management will take care of ensuring that what you see on your screen is reproduced on the printer, provided the printer is physically capable of printing the colours in the gamut.

This is a quite common problem for instance with a CMYK printer which is unable to print any of the primary colours shown as red green and blue on the monitor. The result is a printer that prints a subset of the available colours a screen can display, but at the same time can print outside the gamut of your monitor too.

You don't need a file that has 8 primary colours to take advantage of the really wide gamuts 8 colour printers can print, you just need maths on your side. The ProPhotoRGB colour space works around this by defining the primary for green and blue as imaginary negative values which don't exist in reality. As such using red, green and blue primaries you can create for instance a colour that *almost* represents a pure cyan.

This is something that many photographers who print images already do. I think even the latest Photoshop comes setup out of the box to import raw camera files using ProPhotoRGB as the working colour space.

Besides, the vast, vast majority of people don't colour calibrate their monitors OR printers. Unless you do that regularly all the extra colour channels in the world aren't going to help you.

You don't know photographers very well do you? The vast majority of amateur and all professional photographers I've ever met calibrate their screens. Printer calibration is often not needed as the vast majority of photographers I know outsource their printing to someone else, and that someone else will typically provide them with the colour profile of their printer's last calibration to ensure accurate results can be obtained. Pretty much every printing company will do this for you, even cheap mass production ones like Snapfish.

Re:yeay four sensors (3, Informative)

ceoyoyo (59147) | about a year ago | (#43323871)

You don't seem to know what we're talking about. Let me quote the OP:

"I've been hoping for 4-sensor cameras for ages. People only have three color sensors, but what those colors are vary a bit from person to person, and capturing 4 colors stands a better chance of getting images that look good for everyone."

Yes, more inks in your printer help it reproduce the RGB values that you capture with your camera, save in your files, display on your screen, and send to the printer. Just like in the example I gave, the K channel in CMYK helps make up for deficiencies in the mixing properties of the C, M and Y that don't let you make a proper black by mixing. Extra ink won't do squat to match extra colour information from a theoretical extra colour sensor in the camera though, because everything in between is RGB.

Yes, actually, I know lots of photographers. I calibrate my screen, and I use a printer I chose specifically because they do a good job of frequent calibration. Most professional photographers do. But if you haven't noticed, with the availability of digital cameras a LOT of people took up photography. Hardware screen calibrators are still a niche item, nowhere near as popular as cameras. In particular, Panasonic doesn't make any still cameras that are likely to be used extensively by professionals, so it's likely that even fewer people who shoot Panasonic would calibrate their equipment.

Re:yeay four sensors (0)

Anonymous Coward | about a year ago | (#43323623)

At least when it comes to consumer printers, the extra channels are typically translucent versions of CMYK that allow light to pass through the pigments. So you have cyan and translucent "light cyan", etc. This means you can maintain a broader gamut at higher resolutions without halftoning in ways that create visibly noticeable patterns. The extra channels are normally handled by firmware in the printer or by the printer driver software, so all you're dealing with anyway is the original CMYK. (Or RGB, if you trust the computer to get things close enough.)

Re:yeay four sensors (1)

X0563511 (793323) | about a year ago | (#43322781)

The K in CMYK is not a "color."

Re:yeay four sensors (1)

AK Marc (707885) | about a year ago | (#43322947)

Well, when printing in 8 colors, only one is black, that leaves 7 actual colors. And every bubble jet I've bought in the last 10 years supports 4 colors plus black. Just look for anything that indicates "photo", and that often comes with extra color capabilities, at least it has for any printer I'd considered.

Re:yeay four sensors (1)

dwywit (1109409) | about a year ago | (#43323043)

But it must be treated as a colour, for computational purposes - it's represented by values at one extreme, as is white at the other extreme.
 
In other words, your statement is irrelevant to this discussion.

Just say no to Gizmodo (1)

Anonymous Coward | about a year ago | (#43322311)

This is a really cool new tech. Wonder when it will make it into consumer cameras? Also, could have done without the Gizmodo link - the third link is sufficient to get the information without giving click traffic to those whores.

Re:Just say no to Gizmodo (3, Interesting)

tloh (451585) | about a year ago | (#43322517)

Ironically, the last paragraph at Gizmodo somewhat answers your question:

What's particularly neat about this new approach is that it can be used with any kind of sensor without modification; CMOS, CCD, or BSI. And the filters can be produced using the same materials and manufacturing processes in place today. Which means we'll probably be seeing this technology implemented on cameras sooner rather than later.

Another sampling of reality's infinite colours (0)

Anonymous Coward | about a year ago | (#43322313)

Realistically light is not made up of R, G & B, but humans see light (mainly) in those three bands. As humans we can't tell the difference between light at one specific (within reason) frequency and a mixture of colours that produces the same response in our eyes. How long do you think until we have technologies that can both capture and reproduce imagery made with more than 3 or 4 colour samples?

Why does this matter? Until things improve, other animals are still going to think our photos look weird. Oh, and the gamut of photos sucks, but really, while birds are judging the posters on my wall poorly I can't care about anything else.

Re:Another sampling of reality's infinite colours (1)

Arkh89 (2870391) | about a year ago | (#43322339)

Try multispectral-imagery. Then improve to spectropolarimetry-imagery and then compressive-spectropolarimetry-imagery and then compressive-spectropolarimetry-integral-imagery and then...

Re:Another sampling of reality's infinite colours (1)

hedwards (940851) | about a year ago | (#43322391)

The problem with that is space: you'd have to either substitute the greens for the extra colors or add an additional photosite to the mix. I suppose you could stack it, but that has its own issues with regard to resolution.

Camera gamut is perfectly fine, at least until we get better methods of display, and these are photos for people, not birds.

So essentially... (2)

rusty0101 (565565) | about a year ago | (#43322343)

...we've switched from calculating rggb values based on the attenuated rggb values sensed, to calculating rgb values from sensing cyan (reflected light with the red subtracted), white+blue(?), white+red(?), and yellow (reflected white light minus the blue).

I can see the resulting files having better print characteristics, if the detectors sense to the levels close to the characteristics of ink used for prints, but I don't think that's going to help at the display the photographer will be using to manipulate the images.

And of course neither variety of photo image capture is comparable to the qualities of light that our rods and cones respond to in our eyes.

Re:So essentially... (2)

Lehk228 (705449) | about a year ago | (#43322381)

it means that for any given sensor size, a higher percentage of the incoming photons are captured for analysis.

How this advantage is used is up to the engineers.

It could be used to make sensors that are smaller and just as good as current ones, or to get better quality out of the same sensors. Because this improvement is in the signal-to-noise domain, it will also allow for better high-speed image capture.

Re:So essentially... (2)

EdZ (755139) | about a year ago | (#43322993)

it could be used to make sensors that are smaller and just as good as current sensors

I'm not sure it could. Pixel sizes for really tiny cameraphone sensors (1.1 microns, or 1100 nm) are getting close to the wavelength of visible red photons (750 nm). If you shrink them any more, Quantum Stuff starts to happen that you may not really want.

Re:So essentially... (3, Interesting)

Rockoon (1252108) | about a year ago | (#43323521)

If you shrink them anymore, Quantum Stuff starts to happen which you may not really want to happen.

...unless you embrace the Quantum Stuff and deal with the consequences. One of the nice things about quantum interference is that it's well defined, unlike other forms of noise.

Re:So essentially... (1)

dfghjk (711126) | about a year ago | (#43323699)

Or it may not be an advantage at all.

It is possible that the extra photons not being passed through traditional filters will actually degrade performance. In the past there have been complementary Bayer filter arrays for the same purpose, improved light sensitivity. These cameras delivered inferior color performance.

It is important to have good light sensitivity AND good dynamic range. Dynamic range is not just what your sensor can provide but what you can consistently use. Sometimes filtering light improves the utilization of dynamic range and makes the system better even if it hurts light sensitivity.

Re:So essentially... (1)

Zorpheus (857617) | about a year ago | (#43324351)

Makes me wonder why cyan, magenta, and yellow sensors weren't used from the beginning. They should be as easy to build as RGB, but they should also collect more light, since only a smaller part of the spectrum needs to be blocked for each pixel.
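A quick back-of-the-envelope sketch (in Python, idealizing white light as equal parts R, G, and B energy — an assumption, not a property of any real sensor) shows why a complementary filter passes roughly twice the light of a primary filter:

```python
# Toy model: white light as equal parts R, G, B energy.
# A primary (R/G/B) filter passes one band; a complementary
# (C/M/Y) filter blocks one band and passes the other two.
WHITE = {"R": 1.0, "G": 1.0, "B": 1.0}
TOTAL = sum(WHITE.values())

def primary_filter(band):
    """Fraction of white light passed by an R, G, or B filter."""
    return WHITE[band] / TOTAL

def complementary_filter(blocked):
    """Fraction passed by the complement (e.g. cyan = white minus red)."""
    return sum(v for b, v in WHITE.items() if b != blocked) / TOTAL

print(primary_filter("R"))        # 0.333... -- a red filter keeps one third
print(complementary_filter("R"))  # 0.666... -- cyan keeps two thirds
```

Under this idealization a CMY mosaic collects about twice the photons of an RGB one; as noted elsewhere in the thread, complementary arrays have been tried before and traded this sensitivity for worse color accuracy.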

Re:So essentially... (1)

hedwards (940851) | about a year ago | (#43322399)

The only difference here is that rather than using lenses to focus the light onto individual photosites, they're splitting the light to hit those same photosites. So, at least in theory, you're getting more of the photons as the ones that were being blocked by the filters aren't being wasted.

Re:So essentially... (1)

ceoyoyo (59147) | about a year ago | (#43322449)

"And of course neither variety of photo image capture is comparable to the qualities of light that our rods and cones respond to in our eyes."

You're right. The colour filters used in cameras generally need extra filtering to block out portions of the IR and UV that our eyes are not sensitive to.

Re:So essentially... (2)

PhrostyMcByte (589271) | about a year ago | (#43322473)

I can see the resulting files having better print characteristics, if the detectors sense to the levels close to the characteristics of ink used for prints, but I don't think that's going to help at the display the photographer will be using to manipulate the images.

You can losslessly, mathematically translate between this and RGB (certainly not sRGB) and CMYK. But that's just math. Printing is difficult due to the physical variables of the subtractive color model. The more money you throw at it -- that is to say, the better and more inks and quality of paper you use -- the better it gets. No new physical or mathematical colorspace will improve color reproduction.
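For what it's worth, the purely mathematical round trip really is lossless; the losses all come from physical gamut mapping. A minimal sketch of the naive, profile-free conversion (not what a real ICC-managed print pipeline does):

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB -> CMYK, all channels in 0..1, no color profile involved."""
    k = 1.0 - max(r, g, b)
    if k == 1.0:                       # pure black: C, M, Y are irrelevant
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

def cmyk_to_rgb(c, m, y, k):
    return tuple((1.0 - x) * (1.0 - k) for x in (c, m, y))

rgb = (0.8, 0.4, 0.1)
back = cmyk_to_rgb(*rgb_to_cmyk(*rgb))
assert all(abs(a - b) < 1e-9 for a, b in zip(rgb, back))  # round trip is exact
```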

Re:So essentially... (2)

Rockoon (1252108) | about a year ago | (#43323547)

No new physical or mathematical colorspace will improve color reproduction.

'cept we arent dealing with 'preproduction' - we are dealing with 'capture' - while the RGB color space can indeed encode "yellow" it cannot encode how it got to be yellow (is it a single light wave with a wavelength of 570 nm, is it a combination of 510 nm and 650 nm waves, or is it something else?)

(hint: Your monitor reproduces yellow by combining 510 nm and 650 nm waves, but most things in nature that appear yellow do so because the waves are 570 nm)

Re:So essentially... (5, Informative)

Solandri (704621) | about a year ago | (#43322763)

...we've switched from calculating RGGB values based on the attenuated RGGB values sensed, to calculating RGB values from sensing cyan (reflected light with the red subtracted), white+blue, white+red, and yellow (again, reflected white light minus the blue end of the spectrum).

Your eyes actually aren't sensitive to red, green, and blue. Here are the spectral sensitivities [starizona.com] of the red, green, and blue cones in your eye. The red cones are actually most sensitive to orange, green most sensitive to yellow-green, and blue most sensitive to green-blue. There's also a wide range of colors that each type of cone is sensitive to, not a single frequency. When your brain decodes this into color, it uses the combined signal it's getting from all three types of cones to figure out which color you're seeing. e.g. Green isn't just the stimulation of your green-yellow cones. It's that plus the low stimulation of your orange cones and blue-green cones in the correct ratio.

RGB being the holy trinity of color is a display phenomenon, not a sensing one. In order to be able to stimulate the entire range of colors you can perceive, it's easiest if you pick three colors which stimulate the orange cones most and the other two least (red), the green-blue cones most and the others least (blue), and the green-yellow cones most but the other two least (green). (I won't get into purple/violet - that's a long story which you can probably guess if you look at the left end of the orange cones' response curve.) You could actually pick 3 different colors as your primaries, e.g. orange, yellow, and blue. They'd just be more limited in the range of colors you can reproduce, because of their inability to stimulate the three types of cones semi-independently. Even if you pick non-optimal colors, it's possible to replicate the full range if you add a 4th or 5th display primary. It's just more complex and usually not economical (Sharp made a TV with an extra yellow primary to help bolster that portion of the spectrum).

But like your eyes, for the purposes of recording colors, you don't have to actually record red, green, and blue. You can replicate the same frequency response spectrum using photoreceptors sensitive to any 3 different colors. All that matters is that their range of sensitivity covers the full visible spectrum, and their combined response curves allow you to uniquely distinguish any single frequency of light within that range. It may involve a lot of math, but hey computational power is cheap nowadays.
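As a concrete example of that math: taking the summary's pixel mixes at face value (cyan = G+B, yellow = R+G, "white+red" = R+G+B plus another R, "white+blue" = R+G+B plus another B — an idealization, since TFA doesn't publish the actual spectral responses), recovering RGB is just a small linear solve:

```python
def recover_rgb(cyan, white_red, white_blue, yellow):
    """Invert the idealized mixes: cyan = G+B, yellow = R+G,
    white_red = 2R+G+B, white_blue = R+G+2B."""
    r = (white_red - cyan) / 2.0     # (2R+G+B) - (G+B) = 2R
    b = (white_blue - yellow) / 2.0  # (R+G+2B) - (R+G) = 2B
    g = cyan - b
    return r, g, b

# Forward-simulate one 4-pixel block, then recover the original values.
r, g, b = 0.6, 0.3, 0.1
measured = (g + b, 2*r + g + b, r + g + 2*b, r + g)
print(recover_rgb(*measured))  # ~(0.6, 0.3, 0.1)
```

Note the extra light: the four idealized samples sum to 4R+4G+4B, versus R+2G+B for a Bayer block — the "twice the light" in the headline.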

It's also worth noting that real-world objects don't give off a single frequency of light. They give off a wide spectrum, which your eyes combine into the 3 signal strengths from the 3 types of cones. This is part of the reason why some objects can appear to shift relative colors as you put them under different lighting. A blue quilt with orange patches can appear to be a blue quilt with red patches under lighting with a stronger red component. The "orange" patches are actually reflecting both orange and red light. So the actual color you see is the frequency spectrum of the light source, times the frequency emission response (color reflection spectrum) of the object, convolved with the frequency response of the cones in your eyes. And when you display a picture of that object, your monitor is simply doing its best using three narrow-band frequencies to stimulate your cones in the same ratio as they were with the wide-band color of the object. So a photo can never truly replicate the appearance of an object; it can only replicate its appearance under a specific lighting condition.
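That last paragraph can be made concrete with a toy tristimulus calculation. All the curves below are invented three-bin (short/medium/long wavelength) stand-ins for real cone sensitivities, purely for illustration:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cone_response(illuminant, reflectance, sensitivity):
    # light reaching the eye = illuminant x reflectance, bin by bin,
    # then weighted by the cone's sensitivity curve
    stimulus = [i * r for i, r in zip(illuminant, reflectance)]
    return dot(stimulus, sensitivity)

# invented coarse sensitivity curves for the S, M, L cones
S = (1.0, 0.1, 0.0)
M = (0.1, 1.0, 0.4)
L = (0.0, 0.5, 1.0)

patch = (0.1, 0.5, 0.9)        # "orange" patch: reflects mid + long wavelengths
daylight = (1.0, 1.0, 1.0)
reddish  = (0.5, 0.8, 1.3)     # light with a stronger long-wavelength component

for light in (daylight, reddish):
    print([round(cone_response(light, patch, c), 3) for c in (S, M, L)])
```

The same patch produces different S:M:L ratios under the two lights — the quilt effect. And since the eye (or camera) records only three such numbers, two different spectra that happen to produce the same triple are indistinguishable, which is exactly metamerism.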

Re:So essentially... (1)

WGFCrafty (1062506) | about a year ago | (#43323047)

What's happening when I shine my violet laser at a tennis-ball-green dog toy and it seems to get brighter and reflect white, or at a marble coffee table and it glows blue-white? Really liked your breakdown.

Re:So essentially... (2, Informative)

Anonymous Coward | about a year ago | (#43323151)

Then the material is phosphorous.
The photons from the light source are able to put electrons in the material into a higher orbit (skipping at least one orbit level); then when the electron drops its orbit it doesn't go all the way back to the original orbit. Since the distance of the electron going up is not the same as going down, the photon produced is of a different frequency (color) than the photon from the light source.

The second drop of the electron to the original orbit will also cause another photon to be released which would be a third colour.

Re:So essentially... (1)

Anonymous Coward | about a year ago | (#43323569)

s/phosphorous/fluorescent

Phosphorous is one substance that can have this property, but the general term is fluorescence.

This is also used in washing powders to give the "whiter than white" look: they re-emit UV light as visible light, making the clothes look brighter (which is why they often glow in clubs).

Re:So essentially... (4, Informative)

Rich0 (548339) | about a year ago | (#43323973)

Yup - this is fluorescence.

It is worth noting that a related term is phosphorescence, which is what most people think of when they think of phosphors. For the benefit of those reading, the two are basically the same phenomenon on different timescales.

When light hits an object that is fluorescent it absorbs the light and re-emits it. The re-emitted light has a different spectrum than the absorbed light. The re-emitted light is also emitted AFTER the light is absorbed. In most cases it is emitted almost instantaneously and this is called fluorescence. However, some materials take much longer to emit the absorbed energy as light and this is called phosphorescence.

So, that T-shirt that lights up under a blacklight is exhibiting fluorescence. The watch hands that continue to glow 30 seconds after going from daylight to darkness is exhibiting phosphorescence. They're the exact same thing, but with different dynamics. They both involve electrons absorbing energy and releasing it, but with phosphorescence they get stuck in metastable states (read wikipedia for a decent explanation, but a full one requires a bit more quantum physics than I've mastered).

Re:So essentially... (1)

dkf (304284) | about a year ago | (#43323249)

So the actual color you see is the frequency spectrum of the light source, times the frequency emission response (color reflection spectrum) of the object, convolved with the frequency response of the cones in your eyes.

What's more, many surfaces reflect different colors at different amounts depending on the exact angle you view them at. Butterfly wings are an extreme example of this, or soap bubbles, but the phenomenon is common. (If you ever want to write a physically-accurate ray tracer, you get to deal with a lot of this complexity.) This can make a surface made of a single substance look very different across it. Now, these effects are functions of the wavelength of the incoming light (and the reflection angle, with the surface as a general functional parameter) so you can't just use a simple tri-chromatic approach when calculating what happens. More convolution! Yay!

The good thing about the new sensor is that it is trying to not throw away photons, which should greatly improve performance in low light. (For some reason, most pictures I take seem to be in low light level conditions. Probably due to spending too much time in meetings.)

Good luck with that one, Panasonic (1)

damn_registrars (1103043) | about a year ago | (#43322347)

Remember how the Foveon X3 sensor [wikipedia.org] was supposed to revolutionize digital photography and make the standard sensors obsolete? Tell me how many cameras you've used with those sensors in them.

In other words, technological superiority doesn't always win in digital photography.

I agree with point, but the Foveon works... (4, Informative)

SuperKendall (25149) | about a year ago | (#43322403)

In other words, technological superiority doesn't always win in digital photography.

This is very true, although the Foveon was superior in resolution and lack of color moire only - in terms of high ISO support it has not been as good as the top performers of the day.

But the Foveon chip does persist in cameras: Sigma (who bought Foveon) still sells a DSLR with the Foveon sensor, and now a range of really high-quality compact cameras with a DSLR-sized Foveon chip in them (the Sigma DP-1M, DP-2M and DP-3M, each with a fixed prime lens of a different focal length).

I think, though, that we are entering a period where resolution has plateaued - that is, most people do not need more resolution than cameras are delivering - so there is more room for alternative sensors to capture some of the market by delivering other benefits that people enjoy. Now that Sigma has carried Foveon forward into a newer age of sensors, they are having better luck selling a high-resolution, very sharp small compact that has as much detail as a Nikon D800 and no color moire...

Another interesting alternative sensor is Fuji with the X-Trans sensor - randomized RGB filters to eliminate color moire. The Panasonic approach seems like it might have some real gains in higher ISO support though.

Foveon is awful at colors (1)

enos (627034) | about a year ago | (#43322975)

This is very true, although the Foveon was superior in resolution and lack of color moire only

Foveon is only superior in resolution if the number of output pixels is the same. But if you count photosites, i.e. 3 per pixel in a Foveon, then Bayer wins. A Foveon has about the same resolution as a Bayer with twice the pixel count, but the Foveon has three times the number of photosites.

But the problem is colors.

Foveon has a theoretical minimum color error of 6%. Color filter sensors (e.g. Bayer) have a theoretical minimum error of 0%. Color filter sensors can use organic filters that are close to the filters used by the human eye. Foveon relies on the wavelength-dependent absorption depth of silicon. In addition, there is significant overlap between the sensitivities of the three layers (a red photon may excite any of the three layers, for example). This leads to metamerism, where two colors perceived as the same by the human eye will look like two different colors to a Foveon, or vice versa. Good luck matching makeup to clothes for a fashion shoot.

In addition, the Foveon has horrible effects when colors clip. If you shoot a bright red flower and the red is overexposed, it will "blow out". On a Bayer sensor this looks like a very red flower. The detail might be gone and it's not pretty, but it's red. On a Foveon it turns grey. The image processor tries to fix this, but even that's a recent advancement.

The sad thing about the Foveon is that it would make a great video sensor. It has good on-chip binning and could do live-view or movies long before anyone else could. Sigma threw away this competitive advantage.

False information (1)

SuperKendall (25149) | about a year ago | (#43324313)

Foveon is only superior in resolution if the number of output pixels is the same.

That is a pretty bad way to measure things, because it ignores things like color moire and other artifacts you get with bayer sensors. As I stated, resolution is not everything. And a Foveon chip delivers a constant level of detail, whereas a bayer chip inherently will deliver levels of detail that vary by scene color.

In a scene with only red (say the hood of a red car) you are shooting with just 1/4 of the camera's sensels capturing detail when you shoot with a bayer chip. So the red flower you are about to bring up is converting your 30MP bayer camera into a 7.5 MP camera. This is easy to see when you shoot a color resolution chart. [ddisoftware.com]

But the problem is colors.

The solution is Foveon, which has more accurate colors overall and treats all colors equally in terms of capturing detail.

This leads to metamerism, where two colors perceived the same to the human eye will look like two different colors to a Foveon,

In nine years of shooting with Foveon sensors I have never once seen that happen. And in nine years of seeing people like you make that claim, none have ever been able to show a single image that exhibits this effect.

That's the problem with people that live in a world of theory vs. understanding what cameras can (and cannot) do by shooting them. Instead you pretend you understand what they will do because of your THEORY of how they will work, compared to the real world where the whole camera is a series of many different components and software, any one of which may compensate for issues that arise in one part of the system.

If you shoot a bright red flower and the red is overexposed, it will "blow out". On a Bayer sensor this looks like a very red flower.

Sorry but it goes pink regardless of camera. [flickr.com]

Again, if you shot real cameras instead of just leaning on theory you would understand this.

The sad thing about the Foveon is that it would make a great video sensor.

In reality all of the strengths of the Foveon chip do not matter in video, bayer works well there because of inherent color and detail smoothing helping in a scene with motion. That's why it has never been considered seriously for consumer video applications (it has some place in scientific video capture).

Re:I agree with point, but the Foveon works... (1)

WGFCrafty (1062506) | about a year ago | (#43323057)

Quite right.

Some of the more impressive shots I've seen were on an A series (A85) 4MP camera which can be had for thirty bucks, and some majestic HDRs from a 6mp Konica Minolta. If you have a decent camera and time and tenacity you can make pretty pictures. And conversely I am sure it wouldn't take long to find someone who should just go sell their 5D.

Re:I agree with point, but the Foveon works... (1)

dfghjk (711126) | about a year ago | (#43323917)

"This is very true, although the Foveon was superior in resolution and lack of color moire only - in terms of high ISO support it has not been as good as the top performers of the day."

The Foveon has always been inferior in resolution overall photosite-for-photosite, superior only in a small subset of color combinations, and it has been, in fact, a dismal technology in terms of high ISO. It is not simply "not been as good as the top performers", it is notably worse than Bayer sensors categorically. Foveon is horrible in low light.

"Now that Sigma has carried Foveon forward into a newer age of sensors they are having better luck selling a high-resolution very sharp small compact that has as much detail as a Nikon D800 and no color moire..."

Foveon fanboy alert. Anyone who would rate this as informative has never studied this topic nor understands what the D800 is.

Here's an interesting read here: http://www.luminous-landscape.com/reviews/cameras/sigma_dp2m_review.shtml [luminous-landscape.com]

I despise Luminous Landscape, but if they can recognize technical flaws then they are readily apparent.

The most interesting criticisms in their DP2 review concern color problems. The key thing about digital photography is that you are employing a system, and Foveon draws from a small subset of lenses, a small selection of inferior bodies, and poor software support, which is a critical handicap.

"Another interesting alternative sensor is Fuji with the X-Trans sensor - randomized RGB filters to eliminate color moire."

Randomizing the Bayer pattern doesn't eliminate color moire.

Re:I agree with point, but the Foveon works... (1)

SuperKendall (25149) | about a year ago | (#43324231)

The Foveon has always been inferior in resolution overall photosite-for-photosite, superior only in a small subset of color combinations

The "small subset" is any photographic subject with blue or red. Like fall leaves, anything with detail against a sky, red or blue fabrics with fine detail, etc.

That sure is a "small subset".

it has been, in fact, a dismal technology in terms of high ISO

In the past, possibly. The current cameras handle up to ISO 1600 well in color, and up to ISO 6400 in B&W.

An ISO 6400 example [flickr.com] .

Anyone who would rate this as informative has never studied this topic nor understands what the D800 is.

From your very link:

"But, as I wrote, this is the stuff of web forum fights, not something that serious photographers really spend that much time fussing about."

And the D800e I understand quite well, having shot it before. It has no AA filter which makes it somewhat sharper but also gives you some "nice" color moire (especially in fabric). You plainly don't understand the camera too well yourself if you don't understand this tradeoff that Bayer sensor cameras have to make.

The most interesting criticisms of their DP2 review are color problems.

Not really, since it has more accurate colors than most other cameras, and more importantly, consistent colors within any given scene. No camera will give you accurate colors 100% of the time. [bing.com]

But if you can edit the image or apply a custom white balance and have all of the colors be relatively accurate, then you are better off.

Foveon draws from a small subset of lenses, a small selection of inferior bodies, and poor software support which is a critical handicap.

And from this it's easy to discern you are just one of the small cadre of ignorant Foveon haters: we are talking about a camera with a fixed lens (one that exceeds Leica in quality, BTW), so the lens selection is pretty obviously irrelevant.

I'll let you have the last word, since I have properly debunked your ignorance and shown people just how little you really know about an area you pretend to understand.

Re:Good luck with that one, Panasonic (1)

hedwards (940851) | about a year ago | (#43322409)

Foveon was never superior. If they had been able to make it work properly it would have taken over, but it's always had issues with noise and resolution that CMOS and CCD sensors don't. It's a shame because I wanted it to win, but realistically it's been about a decade and they still haven't managed to get it right; at this rate they probably never will.

Re:Good luck with that one, Panasonic (-1)

Anonymous Coward | about a year ago | (#43322457)

If only I could get Space Nutters to see that we will never mine asteroids, live on Mars or colonize the universe in a decade.

Re:Good luck with that one, Panasonic (1)

hedwards (940851) | about a year ago | (#43324155)

Point being?

It's been a decade and there's no sign of progress on the issue. And the people pushing the technology thought we'd be there by now. Those things you list are far, far more difficult and there isn't already a technology that does any of those things.

Re:Good luck with that one, Panasonic (1)

ceoyoyo (59147) | about a year ago | (#43322431)

Whether or not the Foveon is technologically superior is pretty debatable. It was a neat idea that had some pretty serious shortcomings and, even forgiving those, the difficulty of producing the things left them in the dust as conventional sensors improved.

Re:Good luck with that one, Panasonic (1)

arglebargle_xiv (2212710) | about a year ago | (#43323215)

In other words, technological superiority doesn't always win in digital photography.

In Panasonic's case it's not about achieving superiority but about dealing with inferiority: their consumer-grade camera sensors have always had terrible problems with chroma noise in low-light conditions, so this may just be a way of improving their low-light performance.

Re:Good luck with that one, Panasonic (1)

thegarbz (1787294) | about a year ago | (#43323301)

Depends on what technological superiority means. In photography, light sensitivity is absolutely key for trying to sell a sensor. Most people are interested in figures for noise and the range of ISO settings (provided the camera has more than about 12 Mpx; otherwise they are interested in more resolution too). Foveon failed in all these regards. Their superior colour rendition and absolute lack of moire did not help them at a time when people were scratching their heads at the low resolution and poor sensitivity.

The only people who are interested in absolute accuracy are scientists and astronomers, and there are a myriad of quality CCDs on the market to meet their needs.

Re:Good luck with that one, Panasonic (1)

dfghjk (711126) | about a year ago | (#43323941)

"Their superior colour rendition ..."

Foveon NEVER had superior color rendition. All it offers is lack of color moire at the expense of many other flaws that are, in the balance, vastly more important. Color moire is not the most problematic issue in digital photography.

Re:Good luck with that one, Panasonic (1)

dunkelfalke (91624) | about a year ago | (#43323503)

Two, actually. Sigma SD9, Sigma SD14.

I just wish they would... (4, Interesting)

FlyingGuy (989135) | about a year ago | (#43322421)

Simply use three sensors and a prism. The color separation camera has been around for a long time, and the color prints from it are just breathtaking. Just use three really great sensors and we can have digital color that rivals film.

Check out the work of Harry Warnecke and you will see what I mean.

Re:I just wish they would... (1)

Lehk228 (705449) | about a year ago | (#43322483)

they don't, because that makes the whole thing much bigger and requires much more servicing and calibration

Re:I just wish they would... (4, Informative)

gagol (583737) | about a year ago | (#43322733)

You mean like those 3CCD cameras used to shoot pro broadcast? They have been around for years, if not decades. Consumer goods are another story.

Re:I just wish they would... (1)

TheLink (130905) | about a year ago | (#43323015)

Structural coloration is possible: http://en.wikipedia.org/wiki/Structural_coloration [wikipedia.org]

So using similar concepts couldn't they use some nanotech structures to split/redirect the colours?

Re:I just wish they would... (0)

Anonymous Coward | about a year ago | (#43322527)

also known as "every non-consumer digital video camera"? sure, that'd be nice, but I am quite enamored of my 36x24 sensor, and fitting three of those into a manageable 35mm-like body would be a true feat -- especially with a mirror cage and image processor. I wouldn't be too surprised to see it in the mirrorless segment, or maybe even a super high-end Sony SLT "SLR", but never going to happen for "real" cameras.

Re:I just wish they would... (1)

dfghjk (711126) | about a year ago | (#43323989)

3CCD will disappear from the video segment as well. I would say RED is "non-consumer digital video" and I don't see 3CCD there.

Re:I just wish they would... (0)

Anonymous Coward | about a year ago | (#43322575)

3ccd cameras?

Re:I just wish they would... (3, Insightful)

D1G1T (1136467) | about a year ago | (#43322667)

Er, you mean like in the 3 chip cameras Panasonic has been making for decades?

colors look awful (1)

enos (627034) | about a year ago | (#43322991)

A 3-CCD camera has awful color rendition.
The extra space between lens and sensor also makes for worse lenses (wide-angle at least; telephotos don't care).

Re:I just wish they would... (0)

Anonymous Coward | about a year ago | (#43323003)

Googling didn't make it obvious where I should look to see good examples of the work of Harry Warnecke. Can you give a link? (Would you like to write a wikipedia entry for him?)

Re:I just wish they would... (2, Interesting)

Anonymous Coward | about a year ago | (#43323183)

Pro video 3CCD cameras do this. Interestingly those cameras can make use of a trick so that the lens becomes cheaper.
Normally a lens needs to focus all three colours on the same plane; this is difficult due to dispersion in the glass, so normal lenses use elements made of two different materials with different refractive indices to compensate.

Since the colour for a 3CCD video camera is split, you can simply place each sensor on the focus plane of each colour for a non-compensating lens.

Re:I just wish they would... (1)

dfghjk (711126) | about a year ago | (#43323963)

"Just use three really great sensors then we can have digital color that rivals film."

Digital surpassed film long, long ago.

"Three sensors and a prism" is not a new idea nor has it escaped camera manufacturers. What do you think "3CCD" means on video cameras? Given that, don't you think the lack of that technology in stills might be for a reason?

one f stop (1)

VerdantHue (1154045) | about a year ago | (#43322677)

Twice as much light equals one f stop. Significant, but not game changing.
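In stop terms (a stop being a doubling of light), the claimed gain works out as a one-liner — assuming the "twice the light" figure from TFA holds up:

```python
import math

def stops(light_ratio):
    """Exposure stops gained from a light-gathering ratio."""
    return math.log2(light_ratio)

print(stops(2.0))  # 1.0 -- doubling the light is exactly one stop
print(stops(1.5))  # ~0.58 stops, if the real-world gain falls short of 2x
```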

Nature? (0)

Tablizer (95088) | about a year ago | (#43322687)

Why not do it like the human eye does: most sensor cells are just highly sensitive general-purpose sensors, with relatively infrequent color sensors in the mix. The brain seems to do fine with "spotty" color coverage.

Re:Nature? (1)

rusty0101 (565565) | about a year ago | (#43322765)

The problem is that the sensor system on a camera is not collecting an image destined for the human brain at a given moment. It's dumping data to best represent the original color spectrum that the human eye is able to sense, across the entire field of view of the sensor. As a result, you are presented with an image via a screen or print that lets you look at any portion of the image and recover approximately what the sensor received.

A better question would be why we don't build displays that exploit the color and intensity layout of the retina: as the eye and head move, the portion of the image that the color-sensing cones on your retina are focused on receives the appropriate color stimulation, while the portions that are only sensitive to intensity get grayscale levels. As you move your eyes and head, the system recognises where on the display you are looking and sets the correct stimulation levels to allow the mind to perceive that the screen is showing a full color image.

There are two requirements for that to function. The first is that the display has to be able to detect where your eye is looking. Three cameras at the display should be enough to do that if the displays are planar; for non-planar displays, or HUDs, a head-mounted sensor system of three cameras (and related hardware to sense where the pupil is looking) could provide the needed information. The second requirement is to map out where the color and intensity sensors are on the user's retina, and then apply that information to the portions of the screen the eyes are looking at.

We are at the point where we could do that without too much difficulty. Well, other than adding another two front facing cameras to screens that have a webcam already. The display driver would have to be updated, and I suspect that most of the medium to high end video cards are capable of the necessary processing.

All that said, this is functionally similar to ray tracing, which has been offered as the 'next great improvement in display technology' for some 20 years. I kind of don't expect we'll see it before we see holographic memory become widely available.

But then again...

Re:Nature? (1)

Anonymous Coward | about a year ago | (#43322807)

Why not do it like the human eye does it

Stare at THIS word and try to use your peripheral vision to read the rest of the sentence. The brain does tricks to make our mediocre eyes perform well. A camera that works like human eyes and a brain would fake a picture by stitching together big blurry images with small detailed ones.

Re:Nature? (1)

wonkey_monkey (2592601) | about a year ago | (#43322979)

That sort of is what they do. In a normal Bayer sensor there are two green pixels for every red or blue. Green is the frequency we're most sensitive to, so it is (fairly close to) a "brightness" measurement. This new sensor has come a little closer, since it has white+ pixels (twice as many as the other colour pixels) as well.

Re:Nature? (1)

TheTurtlesMoves (1442727) | about a year ago | (#43324349)

Well, these in fact do that. Using more green in the Bayer pattern devotes more area to luminance. Some Bayer patterns even include "white" sites. The idea is that you get roughly full-resolution luminance but lower-resolution color information - which is how we see, and why JPEG etc. also does this.
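The JPEG parallel is 4:2:0-style chroma subsampling: keep luma at full resolution and average each 2x2 block of a chroma plane. A minimal sketch, with plain Python lists standing in for image planes:

```python
def subsample_chroma(plane):
    """Average each 2x2 block of a chroma plane (dimensions assumed even),
    halving its resolution while the luma plane would be kept full-size."""
    h, w = len(plane), len(plane[0])
    return [
        [(plane[y][x] + plane[y][x + 1] +
          plane[y + 1][x] + plane[y + 1][x + 1]) / 4.0
         for x in range(0, w, 2)]
        for y in range(0, h, 2)
    ]

cb = [[10, 12, 20, 22],
      [14, 16, 24, 26],
      [30, 32, 40, 42],
      [34, 36, 44, 46]]
print(subsample_chroma(cb))  # [[13.0, 23.0], [33.0, 43.0]]
```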

why RGB? (1)

kipsate (314423) | about a year ago | (#43323943)

Why is RGB used for filtering at all? Wouldn't it be better to use the inverse (i.e., CMY or no-red, no-green, no-blue) instead? Wouldn't that allow twice as much light to pass through? I must be missing something obvious, someone care to explain what I am missing here?