
World's First Full HDR Video System Unveiled

CmdrTaco posted more than 3 years ago | from the life-in-contrast dept.

Graphics 107

Zothecula writes "Anyone who regularly uses a video camera will know that the devices do not see the world the way we do. The human visual system can perceive a scene that contains both bright highlights and dark shadows, yet is able to process that information in such a way that it can simultaneously expose for both lighting extremes – up to a point, at least. Video cameras, however, have just one f-stop to work with at any one time, and so must make compromises. Now, however, researchers from the UK's University of Warwick claim to have the solution to such problems, in the form of the world's first full High Dynamic Range (HDR) video system."


107 comments


Wow! (2, Insightful)

snspdaarf (1314399) | more than 3 years ago | (#34921198)

Now p0rn can be filmed in sunlight and shadow at the same time!

Re:Wow! (1)

Anonymous Coward | more than 3 years ago | (#34921634)

Only fitting that a website called "Jizzmag" should reveal it to the world.

Re:Wow! (1)

donotlizard (1260586) | more than 3 years ago | (#34921670)

And black on black porn. You know, because dark skin may be hard to see in crappy video.

Re:Wow! (0)

Anonymous Coward | more than 3 years ago | (#34921890)

That's why the bukkake shots are filmed beforehand. All that white helps improve the contrast.

More interesting than 3D? (5, Insightful)

Anonymous Coward | more than 3 years ago | (#34921204)

Personally, I think the HDR screen described, with HDR videos, would be more interesting and immediately useful than the ever-so-commonly-advertised but ever-so-rarely-purchased "3D" screens.

Re:More interesting than 3D? (0)

Anonymous Coward | more than 3 years ago | (#34921706)

Personally, I think the HDR screen described, with HDR videos, would be more interesting and immediately useful than the ever-so-commonly-advertised but ever-so-rarely-purchased "3D" screens.

Once 3d porn starts becoming mainstream, ALL screens will become 3d.

Re:More interesting than 3D? (1)

macshit (157376) | more than 3 years ago | (#34923986)

Personally, I think the HDR screen described, with HDR videos, would be more interesting and immediately useful than the ever-so-commonly-advertised but ever-so-rarely-purchased "3D" screens.

I totally agree -- it's quite clear the consumer electronics industry is thrashing around like crazy trying to find something to convince people to re-buy all their equipment, but the 3d tech they seem to have chosen is so completely "meh" (if not downright "ugh") that it seems almost guaranteed to fail to live up to the hopes they've pinned on it...

HDR display tech, by contrast [haha], works quite well, isn't really all that challenging technically, is well suited to price reduction through mass production, and the additional processing power required to handle the extra bandwidth is probably available these days. It meshes very well with high-definition and can generate the sort of "wow" factor (especially with games!) that they want to sell new equipment.

[It seems like the main technical issue would be a video format. For still images, we've got excellent formats like EXR [openexr.org] (read Greg Ward's excellent comparison of HDR image encodings [anywhere.com] ), but I'm not aware of any reduced-bandwidth formats for HDR video (AFAIK, film companies tend to simply store every frame as a separate EXR image).]

But noooooo, "it's 3d for you, sonny, like it or not"... :/

Re:More interesting than 3D? (1)

Zelgadiss (213127) | more than 3 years ago | (#34924084)

HDR screens are stupidly expensive though, from what I've heard.

Current LCD TV tech has enough trouble getting the contrast up and decent blacks.

I have no idea what exotic tech those high-end professional HDR screens use.

Re:More interesting than 3D? (1)

Stooshie (993666) | more than 3 years ago | (#34925540)

HDR doesn't need special screens. All the work is done at the recording end. You can watch HDR on your computer screen right now. (http://www.youtube.com/watch?v=BlcLW2nrHaM)

Re:More interesting than 3D? (2)

penguinchris (1020961) | more than 3 years ago | (#34925654)

Yes, but... that's not what it actually looks like to our eyes. Same with HDR still photography, which I've dabbled in a bit. The actual video or photograph may be HDR, but in order to display properly on an inexpensive screen, the depth is compressed. Thus, you need expensive screens to see it properly. That's not to say you can't get impressive results that are indistinguishable from the real thing to the untrained eye, but the ability to truly display HDR video has the potential to blow people away. It'd be able to trick our minds into thinking it's a window into another world, even without 3D... and combined (assuming the 3D is done well) it'd be astonishing.

Re:More interesting than 3D? (0)

Anonymous Coward | more than 3 years ago | (#34925356)

Somebody make a HDR Bladerunner version, where we can actually see what the fuck is happening.

Now we just need... (-1)

Anonymous Coward | more than 3 years ago | (#34921210)

Now we just need a video system that can actually capture CmdrTaco's micropenis without an expensive and ultra-powered microscope.

Not so fast there son (5, Insightful)

Anonymous Coward | more than 3 years ago | (#34921298)

Didn't a pair of guys do this last year using a pair of DSLRs and a beam splitter?

Also, unless someone is building an HDR display, this is all pretty academic. HDR images have to have their range compressed and then tone mapped in order to be displayed via conventional means; this is normally terribly unsubtle and results in an image that looks not entirely unlike it was rendered using 3D modelling. If we are going to see another big shift in display (read: TV) technology in the next decade, I would much rather we moved away from the sRGB / YUV colour space than started fucking about with HDR content. What's the point of trying to take advantage of our eyes' exposure latitude if we can only render 1/3 of the colours?

Re:Not so fast there son (1)

UDChris (242204) | more than 3 years ago | (#34921398)

Mod parent up. I'd rather have a revolution in home entertainment tech than another "filmed in high-def, compressed to 480i for the masses" such as the one we're currently digging out of. I still have 76 channels of standard def, even though the cable company pretty much requires you to get a box in my area, which allows for digital-to-analog conversion.

I'd rather have a revolution in tech that's so revolutionary you have to adopt, up and down the line, to be able to use it at all. Unfortunately, my pipe dream is interrupted by the economics of incremental adoption, but I can dream.

Re:Not so fast there son (1)

Nialin (570647) | more than 3 years ago | (#34921454)

We need a launch pad for better visual technology. This could be crucial for future photographic endeavors.

Re:Not so fast there son (2)

UmbraDei (1979082) | more than 3 years ago | (#34921848)

Correct, it's been done before using two Canon 5D Mark IIs, as seen in this clip: Soviet Montage Blog [sovietmontage.com]

Re:Not so fast there son (1)

Anonymous Coward | more than 3 years ago | (#34921854)

read article for details of HDR display. thx

Re:Not so fast there son (1)

scdeimos (632778) | more than 3 years ago | (#34922912)

Lack of details, you mean. Sounds like BrightSide tech to me (i.e.: a multi-zone LED backlight through normal LCD panel).

Re:Not so fast there son (4, Informative)

MightyMait (787428) | more than 3 years ago | (#34921886)

If you read the article, the author mentions both the folks in SF who did HDR video previously as well as the fact that the Warwick team have indeed developed a new HDR display for their system.

Also mentioned is that the new system generates 42GB/minute of data to capture images with 20 exposures per frame. My backup nightmare just grew large fangs!
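For scale, 42GB/minute is about what uncompressed 1080p30 with a 32-bit float per channel works out to. A quick back-of-the-envelope check (the storage format here is my assumption; the article doesn't specify one):

```python
# Sanity-check the quoted ~42 GB/minute, assuming an uncompressed 1080p30
# stream with a 32-bit float per RGB channel (that format is a guess --
# the article doesn't say how the HDR frames are actually stored).
width, height = 1920, 1080
fps = 30
bytes_per_pixel = 3 * 4  # 3 channels x 4 bytes (32-bit float)

bytes_per_minute = width * height * bytes_per_pixel * fps * 60
gib_per_minute = bytes_per_minute / 2**30
print(f"{gib_per_minute:.1f} GiB/minute")
```

That lands at about 41.7 GiB/minute, close enough to the quoted figure to be a plausible interpretation.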

Re:Not so fast there son (1)

smallfries (601545) | more than 3 years ago | (#34926540)

I wouldn't take the article as gospel as it appears to repeat what the Warwick guys have claimed. But of course they did not develop the camera that they use. It is in fact an existing commercial product that they use off-the-shelf.

Details (from the real developer) can be found here [spheron.com] .

Re:Not so fast there son (0)

MichaelKristopeit404 (1978298) | more than 3 years ago | (#34921906)

slashdot = stagnated

Re:Not so fast there son (3, Informative)

zeroeth (1957660) | more than 3 years ago | (#34921922)

Speaking of 3D rendering, most renderers output HDR, which would be awesome to see without being tone mapped! (LuxRender has built-in Reinhard tone mapping.) And since we are on the topic of HDR, this is what it is NOT: http://www.cs.bris.ac.uk/~reinhard/tm_comp/flickr_hdr/The%20Problem.html [bris.ac.uk] (Reinhard discusses the blown-out tone mapping heavily prominent on Flickr)
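For anyone curious, the global version of the Reinhard operator mentioned above is basically a one-line curve, L/(1+L), applied to scaled luminance. A minimal NumPy sketch (the function and parameter names are mine, not LuxRender's):

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18):
    """Global Reinhard operator: compress unbounded linear radiance into
    [0, 1). `key` scales the scene's log-average luminance to middle grey
    (both names are illustrative, not from any particular implementation)."""
    # Rec. 709 luma weights for luminance
    luminance = 0.2126 * hdr[..., 0] + 0.7152 * hdr[..., 1] + 0.0722 * hdr[..., 2]
    log_avg = np.exp(np.mean(np.log(luminance + 1e-6)))
    scaled = key / log_avg * luminance
    mapped = scaled / (1.0 + scaled)  # asymptotically approaches 1, never clips
    ratio = np.where(luminance > 0, mapped / (luminance + 1e-6), 0.0)
    return np.clip(hdr * ratio[..., None], 0.0, 1.0)

# A 1x2 "image": one dim pixel and one pixel 1000x brighter.
frame = np.array([[[0.05, 0.05, 0.05], [50.0, 50.0, 50.0]]])
ldr = reinhard_tonemap(frame)
```

Both the dim pixel and the 1000x-brighter one end up inside the displayable [0, 1) range, which is the whole point of the operator.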

Re:Not so fast there son (1)

Em Adespoton (792954) | more than 3 years ago | (#34922360)

It seems to me the answer is pretty obvious; all we need to do is bring back interleaving, with alternate bracketed frames for alternating lines. Ramp FPS up to 60, and you should get something approximating HDR video on current technology displays.

Re:Not so fast there son (0)

Anonymous Coward | more than 3 years ago | (#34922468)

You don't need to have an 'HDR Display' to show the video, numbnuts. For being so 'academic' you don't really have a grasp on what you're talking about. HDR images don't have to be processed the way you said because of limitations of conventional technology. They need to be processed to combine areas that would typically clip - be so bright or so dark that there would be no detail - with the midtones of what would be considered a properly exposed image. Additionally, we aren't talking about combining images, we're talking about an HDR video camera which more than likely has multiple processors to increase the number of stops in its range to more closely mirror the 32-stop range of the human eye.

Once again, we aren't talking about still images, so your mention of the cartoony/3D modelling look is not likely to play into this equation. Nice try.

In reference to your last statement, increasing exposure range has nothing to do with being able to display color. We're talking about details that would be lost using a typical single sensor camera; enabling us to create video content that is similar to how we would perceive a scene if we were there ourselves.

I'm sure this topic was obviously so elementary that these details weren't worth considering...

Re:Not so fast there son (0)

Anonymous Coward | more than 3 years ago | (#34922674)

"HDR images don't have to be processed the way you said because of limitations of conventional technology. They need to be processed to combine areas that would typically clip"

Did you even read what you wrote, or the post to which you are replying, for that matter? Your confusion over a display's dynamic range, and a desire to move away from the standard RGB / YUV colour spaces, suggests not.

I would also suggest that increasing the number of CCD sensors in a camera would probably be more effective at increasing its exposure latitude than increasing the number of "processors".

Re:Not so fast there son (1)

ZosX (517789) | more than 3 years ago | (#34923474)

Clearly you don't know much about HDR. You can also use HDR to create very natural-looking images; in fact, this is one of its primary uses. Also, you don't need an HDR display, because any HDR you see on the web is presented to you in boring 8-bit JPEG for the most part. Most screens cannot adequately display the 32-bit color necessary for true HDR to really shine. That being said, even as 8-bit JPEGs, HDR will typically give you a much larger dynamic range than what most sensors are capable of. One day, hopefully, camera sensors and display technology will evolve to 32 bits, and all this HDR stuff will seem like old hat when you can get the same results in just one shot.

Re:Not so fast there son (1)

NibbleG (987871) | more than 3 years ago | (#34923484)

Yeah, they did at least six months ago... That was when I saw it... www.youtube.com/watch?v=BlcLW2nrHam

Re:Not so fast there son (1)

Jarik_Tentsu (1065748) | more than 3 years ago | (#34925140)

Even if you don't have a finished HDR product, being able to edit in HDR is amazing. Yes, it would be great to have HDR displays, but even without them, say you're trying to mess with brightness in a pic. You want, say, a bright sunny day that's resulting in washed-out colours to look better. You can mess with the colour curves, but depending on your video, this can look horrible.

With HDR, just grab the brightness slider and pull it down. It'll look great.

I've used 32bpc before, but in a really roundabout way. Duplicate a video track, bring out purely the highlights, delete any highlights that aren't actually bright, then add them to the original track in 32bpc space. It gives nice motion blurs (the 'lights' have stronger blurs than the shadows) and all that.

Re:Not so fast there son (0)

Anonymous Coward | more than 3 years ago | (#34927534)

There's a team in Vancouver that's been doing HDR flat-panel TVs for at least 5 years. I know I saw a demo about 3 years ago.

Cool stuff (2)

Monkeedude1212 (1560403) | more than 3 years ago | (#34921330)

I first learned about HDR from Valve, during one of the developer commentaries on one game or another... (Lost Coast maybe? Anyways.) They were trying to explain how bloom is done in video games, and certain other effects, like how walking out of a dark tunnel into bright light will affect your vision for a tiny bit, as your eyes need to adjust to the new lighting conditions.

That's when I started looking it up, and yeah, basically the idea is that you take one shot that is underexposed (dark), one shot that is overexposed (light), and one that is properly exposed, plus as many more in between as you want. Then you feed it all into a bit of software which takes the richest colours and lighting conditions from each photo and imposes them onto one single image, so the dark corners remain dark, the bright lights remain bright, and the vivid colours are still vivid. It's quite cool stuff.

I'm a little curious as to how this is working: is it managing to encode the HDR in real time into its range-compressed and tone-mapped beauty at at least 24 frames per second, or does it merely record the 3 or more images simultaneously and then take a few minutes afterwards to do the encoding? The first, I think, would be more impressive, but not really necessary.
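The merge step described above can be sketched in a few lines. This is a naive fusion that just weights each bracket by its proximity to mid-grey (the weighting is my own toy choice, not what any particular HDR package does):

```python
import numpy as np

def fuse_exposures(shots):
    """Blend bracketed exposures of the same scene (greyscale, values in
    [0, 1]): pixels near mid-grey get high weight, so crushed shadows and
    blown highlights in any one bracket contribute little."""
    shots = np.stack([np.asarray(s, dtype=np.float64) for s in shots])
    weights = np.exp(-((shots - 0.5) ** 2) / (2 * 0.2**2))  # favour mid-tones
    weights /= weights.sum(axis=0, keepdims=True)           # normalise per pixel
    return (weights * shots).sum(axis=0)

# Three brackets of a two-pixel scene: a deep shadow and a bright window.
under = np.array([[0.02, 0.45]])  # shadow crushed, window well exposed
mid   = np.array([[0.10, 0.90]])
over  = np.array([[0.40, 1.00]])  # shadow readable, window clipped
fused = fuse_exposures([under, mid, over])
```

Each output pixel leans on whichever bracket exposed it best, which is the "richest from each photo" behaviour described above.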

Re:Cool stuff (1)

Protoslo (752870) | more than 3 years ago | (#34921484)

It takes 20 images for each frame, at 30fps 1080p. You combine them yourself in post with the help of special software that can also apparently deduce the location and intensity of various light sources, allowing you to add rendered objects into the scene with realistic lighting.

Re:Cool stuff (1)

funwithBSD (245349) | more than 3 years ago | (#34921544)

Great, they reinvented Kodachrome.

Sony cameras do this in real time for still photos on consumer-grade electronics (the sub-$500 A33, for example), so I am thinking that the same can be done in real time for movies.

Some HDR techniques don't require 3 photos; they can extrapolate from a single exposure.

Re:Cool stuff (3, Informative)

nomel (244635) | more than 3 years ago | (#34922760)

Unless you have an HDR screen, this would require automatic tone mapping. The thing about automatic tone mapping is that you have to decide what intensity information to throw out, since you only have 256 values that you can display. For instance, using a 14-bits-per-color-channel Canon DSLR sensor, if you want to look at the image on your screen, this means you'll have to throw out 98.4% of your intensity values. It is extremely important which values you decide to throw out, especially considering there's usually a subject or subjects in a photo that you want to keep visible.

By the way, this 14 bits gets you about +/- 2 stops... the camera they're talking about gives you 20 stops... that's an *incredible* amount of intensity information (given the file size). Really this is more of a solution for filming a scene once and not having to worry about whether your camera exposure is set correctly, which *is* extremely valuable.

Now, viewing HDR movies? Not in theaters with any sort of current projection technology at reasonable ticket prices. The projection bulbs would have to go up probably 20 times in brightness, keeping similar crappy projection-theater black levels. And how do you deal with the ambient light coming off of your now incredibly bright white screen and bouncing off of the audience? At home, do you really want a TV that bright? From this bit-tech [bit-tech.net] review: "The light from the box was so bright, or indeed, was of such great contrast with the surrounding area, that it almost hurt to look at."
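The 98.4% figure is just the ratio of 8-bit display codes to 14-bit sensor codes:

```python
# Mapping 14-bit sensor codes straight onto an 8-bit display keeps only
# 256 of 16384 possible values -- the 98.4% discarded in the comment above.
sensor_bits, display_bits = 14, 8
kept_fraction = 2**display_bits / 2**sensor_bits
discarded_pct = (1 - kept_fraction) * 100
print(f"{discarded_pct:.1f}% of code values discarded")
```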

Re:Cool stuff (4, Informative)

The13thSin (1092867) | more than 3 years ago | (#34923792)

I would like to correct a mistake prevalent here and in the news summary: common cameras do NOT get just 1 (or 2) stops of light information (the difference between black and white). In fact, cameras like the Canon 7D have about 11 stops of dynamic range [source] [clarkvision.com], and professional video cameras like the Red One have about 13 1/2 stops of difference between black and white [source] [red.com]. Still, as X stops means 2^X times the light difference, going from 13 1/2 to 20 stops is a pretty huge deal.

Another misconception: the number of bits per channel only indicates precision, not dynamic range. Of course, when the researchers in the article created a 20-stop camera, they needed much better precision to get similar quality across that range as current cameras, which leads to the quoted 42 GB per minute of uncompressed video.

(Please note: DSLR cameras like the Canon 7D can detect and save more dynamic range than is apparent from the JPGs they create; the extra information is saved in the RAW file, which allows you to change exposure settings by at least 1 stop in post-processing without a (noticeable) drop in quality.)
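Since X stops means a 2^X contrast ratio, the jump from the Red One's ~13.5 stops to the claimed 20 is easy to put in numbers:

```python
# "Stops" are powers of two: each extra stop doubles the ratio between
# the brightest and darkest distinguishable levels.
def contrast_ratio(stops):
    return 2**stops

red_one = contrast_ratio(13.5)   # roughly 11,585:1
warwick = contrast_ratio(20)     # 1,048,576:1
improvement = warwick / red_one  # about 90x more recordable range
```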

Re:Cool stuff (4, Interesting)

Doogie5526 (737968) | more than 3 years ago | (#34921844)

I find it kind of funny that HDR means the opposite thing in photography versus video games
http://img194.imageshack.us/img194/7391/1244894383293.jpg [imageshack.us] (pulled from some old digg post)

Traditionally games render the world and keep it between 0 and 1 (zero being black/completely dark and 1 being white). HDR rendering computes values above and below that range and clips on output, so things that are blown out (like reflections and highlights) read as super-white. I think it was an update to Half-Life 2 that first did this in a commercial game.

In photography, they take multiple exposures and stick them into an HDR image. Then they use tone mapping to convert it to an 8-bit visible image. Tone-mapped images are generally called HDR, even though that's a misnomer.

Re:Cool stuff (0)

Anonymous Coward | more than 3 years ago | (#34925774)

I guess the difference is that in HDR photography, the image is HDR; while in computer games, the world of the game is HDR, while the image on the screen isn't.

As a hobby photographer... (0)

Anonymous Coward | more than 3 years ago | (#34921370)

I don't see what the problem is. In most cases, HDR recording technology is completely unnecessary, at least if you use a state-of-the-art digital (still) camera. I've taken pictures of scenes with direct sunlight and dark shadows, and with just one exposure there's good detail in both areas of the picture. The scenes which a modern digital camera can't capture in one exposure are quite extreme and not at all what HDR techniques are typically used for.

However, if you print these properly exposed pictures with plenty of detail, or turn them into JPGs unedited, either the highlights are blown out or the shadows are purely black, or both, if you want a realistic overall brightness. The problem isn't the sensor, it's the display. The dynamic range of a high-quality print or a computer screen is tiny compared to that of the camera, and so you need to apply copious amounts of tone mapping to make all the highlight and shadow detail available at the same time. But that is a post-processing step and doesn't need new technology, just better algorithms.

Re:As a hobby photographer... (2)

moonbender (547943) | more than 3 years ago | (#34922664)

Well, there are situations where you need multiple exposure to avoid clipping some details. E.g. images like this one: http://hdrcreme.com/photos/23496-Entrance-of-a-Small-Church [hdrcreme.com] There is no way a consumer camera can get the detail of the interior of the dark room (and the colorful window) as well as the bright outsides.

Re:As a hobby photographer... (0)

Anonymous Coward | more than 3 years ago | (#34924956)

That looks fucking awful. Yes, exposure blending is occasionally a valid technique, but that sort of tonemapping is just absolutely stupid. Contrast is good. Images should remain interesting in greyscale.

Re:As a hobby photographer... (1)

moonbender (547943) | more than 3 years ago | (#34926286)

I was posting the image (which isn't mine) to make a point, not to praise it. That said, I think it still looks pretty interesting in grayscale. In some ways I actually like it better desaturated.

actual link (0)

Anonymous Coward | more than 3 years ago | (#34921474)

Here's the Warwick press release [warwick.ac.uk] complete with a (non HDR) video

So how about some decent framerate? (4, Insightful)

NeutronCowboy (896098) | more than 3 years ago | (#34921482)

I mean, we have 1080p 3D stereovision with full-micron surround color effects, and yet, movies still stutter like mad on a fast pan because that damn 24 fps capture rate just can't keep up. Is it really so much harder to capture 60 fps and encode than it is to do a working 3D effect? I'd pay more for movies that have reliable framerates in the 60 Hz range than I would for 3D.

Re:So how about some decent framerate? (4, Interesting)

Doogie5526 (737968) | more than 3 years ago | (#34921718)

Roger Ebert asked the same thing (on page 4)
http://www.newsweek.com/2010/04/30/why-i-hate-3-d-and-you-should-too.html [newsweek.com]

I think there are a couple of reasons. The first, and probably most significant, is nostalgia on the part of filmmakers. They love the motion blur of 24fps. It helps evoke the "feeling" of film. Every film student I know wants either to shoot at or convert their footage to 24fps. There is a noticeable difference. When you start increasing the resolution and frame rate, you lose motion blur and it starts to look like home video or video games (which generally don't compute motion blur at all).

Another big issue is the amount of light. When you have more frames in a second, each frame has less light to suck up. It's a big issue with high-speed film. Having sensors that are more light-sensitive is a fairly recent thing (combined with advanced noise reduction) and will continuously get better.

The stuttering is something cinematographers keep in mind when shooting (or at least, they should). I read an article about shooting IMAX, and they said the biggest problem was the stuttering. They're also using 24fps, but the screens are much larger. When you pan, an object could jump 2 to 3 feet per frame. They intentionally had slower pans to compensate. You noticing this is probably a side effect of larger theatrical screens and larger TVs at home.

Re:So how about some decent framerate? (0)

ShooterNeo (555040) | more than 3 years ago | (#34924252)

Motion blur can just be re-added digitally while retaining 60 fps. And if you don't project film, the light intensity problem you mention is not a problem.

Re:So how about some decent framerate? (1)

Anonymous Coward | more than 3 years ago | (#34925274)

I believe re-adding motion blur is harder than you think. The 2D motion blur techniques I've seen basically do a smearing, which looks terrible on composited images (and looks OK on individual, pre-composited elements - as long as it's an up/down or left/right motion and not toward or away from the camera). Better algorithms may exist, but the processing power necessary would be prohibitive. It would also look different, which is part of the nostalgia filmmakers have. Spielberg still edits his films by cutting the negative. It'll take a whole generation to transition. And cinematographers, who are more attached to these things, have longer careers and don't seem to hit their stride until they're 45 years old or so.

Regarding the light intensity problem, you either misunderstand or misread my statement. It's not a matter of the presentation (such as projection). If your shutter time decreases, which is what has to happen if you increase your fps while keeping the same aperture, your only other variable is ISO. High ISOs are notoriously grainy and also lose saturation. Film kind of plateaued around ISO 1600. Digital has exceeded this in recent years by quite a bit while keeping grain down. I would expect they would look to reducing the size and expense of lighting equipment before jumping to a different frame rate.

Re:So how about some decent framerate? (0)

PhrostyMcByte (589271) | more than 3 years ago | (#34921766)

Higher FPS would make motion a lot sharper (less motion blur), but it won't solve the stuttering. Stuttering is caused by some CMOS-based HD cameras (like the very popular Red One) having rolling shutter, where they capture the image in chunks instead of all at once. Find an old PC game, turn off vsync, and look at the nasty tearing you get when it tries to render 100fps onto your 60fps monitor in a rolling fashion. Same issue -- high FPS won't solve it.

Re:So how about some decent framerate? (0)

Anonymous Coward | more than 3 years ago | (#34922104)

That's not the "stuttering" the GP is talking about - the effects of a rolling shutter are something entirely separate. He's talking about the general lack of temporal resolution involved with anything shot at 50Hz.

Re:So how about some decent framerate? (2)

ChrisMaple (607946) | more than 3 years ago | (#34921986)

The same motion detection that video compression uses to reduce data rates could be used to interpolate as many frames as you wish between two existing frames. Goodbye stutter. As software projects go, it's not even a particularly difficult job.
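A toy version of that interpolation idea, using the same block-matching that codecs use for motion estimation and then dropping each patch halfway along its motion vector (a crude sketch; real interpolators handle occlusion, sub-pixel motion, and much more):

```python
import numpy as np

def interpolate_midframe(a, b, block=8, search=4):
    """Sketch of motion-compensated interpolation between greyscale frames
    `a` and `b`: block-match each patch of `a` in `b`, then place the blended
    patch halfway along its motion vector. Assumes frame dimensions are
    multiples of `block`. A toy version of what codecs' motion search does."""
    h, w = a.shape
    midframe = np.zeros_like(a, dtype=np.float64)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = a[y:y + block, x:x + block]
            best_err, best_vec = np.inf, (0, 0)
            # Exhaustive search over a small window of candidate motions.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    err = np.abs(patch - b[yy:yy + block, xx:xx + block]).sum()
                    if err < best_err:
                        best_err, best_vec = err, (dy, dx)
            dy, dx = best_vec
            my, mx = y + dy // 2, x + dx // 2  # halfway along the vector
            matched = b[y + dy:y + dy + block, x + dx:x + dx + block]
            midframe[my:my + block, mx:mx + block] = (patch + matched) / 2
    return midframe
```

For a static scene (identical frames) the interpolated frame reproduces the input exactly; for real footage the quality depends entirely on how well the block match tracks the true motion, which is exactly where the hard part lives.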

Re:So how about some decent framerate? (3, Informative)

nomel (244635) | more than 3 years ago | (#34922280)

It's because it's not pleasing to the eye. 60fps movies look very strange...like home videos. The 24 fps is what gives them that "movie look". If you look at some example vids from some of the newer consumer cameras that can do 24 and 60fps...you'll see the huge difference it makes.

Re:So how about some decent framerate? (2)

futuresheep (531366) | more than 3 years ago | (#34922812)

24fps has nothing to do with being pleasing to the eye. 16fps was common before this, Edison was pushing for 48fps. 24fps was a compromise that was chosen simply because it was the slowest, meaning cheapest, speed that allowed for good sound quality. http://en.wikipedia.org/wiki/Film#Technology [wikipedia.org]

Re:So how about some decent framerate? (1)

penguinchris (1020961) | more than 3 years ago | (#34927074)

That all may be true, but that doesn't mean that 24fps isn't pleasing - because it is! It may have just been a happy side effect, but it worked out quite well I'd say.

HDR storage not imagery (1)

whiteboy86 (1930018) | more than 3 years ago | (#34921572)

They claim

"more representative description of real world lighting by storing data with a higher bit-depth per pixel than more conventional images"

So basically they store the image with greater color resolution than conventional 8-bit RGB -- they are not getting real-time over/under exposure passes to get an HDR-enhanced mixed output.

Already Done Before (1)

Plekto (1018050) | more than 3 years ago | (#34921618)

Two companies already dealt with this in the past, though they aren't doing as much with the technology as one would hope.

1 - Fuji has a sensor that does HDR already. Several years ago, in fact. It has two overlapping layers of sensors and takes two images at the same time, then blends the two together. Done; you can buy it today. It just isn't in use in a video camera yet (and their current camera doesn't do video very well).

2 - Foveon also has a different sensor approach where it is layered like film. This gives it the same dynamic range as our eyes, or very close to it. It produces amazingly clean and beautiful images. But the resolution of the sensor is fairly low at about 4.5MP. Those are real full-color photosites, though (no interpolation or moire, as there is no "pattern" to the sensor), so it compares with a typical 10-12MP camera. Still, it's fairly low. Yes, their latest camera does do HD video, which is quite nice.

The OP can be forgiven, though, as these two companies spend almost nothing on advertising these two technologies. I'd wager that only one in 20 or 30 people who are interested in photography and video (that I talk to) are even aware of them. A normal person simply doesn't know at all.

Re:Already Done Before (1)

HateBreeder (656491) | more than 3 years ago | (#34921688)

The Foveon sensor is not an HDR sensor. It simply does away with the Bayer filter by having layers in the silicon so that different wavelengths penetrate to different depths; the amount of light received is still proportional to the amount of charge the photosite contains after the exposure is over... so it still suffers from the dynamic range problem of standard single-color-per-photosite sensors.

Re:Already Done Before (1)

Plekto (1018050) | more than 3 years ago | (#34922866)

Well, that's somewhat true. A typical sensor will accurately capture all but about one f-stop higher and lower than human vision. That means that overblown super-HDR-type movie poster shots and the like aren't something you should expect from any camera. It behaves like an HDR camera because it doesn't hit 255 and simply put solid white in the image when it gets over-exposed. A typical Bayer sensor will do this, and there's nothing to be done about it - you hit that wall and you're done. There's nothing to recover at that point - the data has simply vanished.

The Fuji has two methods by which it does this. The first was the Super CCD SR, which actually had dual sensors and took two shots at once at different f-stops and then blended them. The EXR version that they currently sell does pixel binning, which, while it does create slightly better dynamic range, suffers from more interpolation issues (as one would expect) than a typical sensor. It works, but it's less than optimal in terms of color fidelity.

http://www.kenrockwell.com/fuji/s5/dynamic-range.htm [kenrockwell.com]
This is an older article, and he's a bit of a tool (has a hard-on for Nikon and hates anything but landscapes), but 2/3 of the way down you can see a great example of this in action. Also, the color charts at the bottom show how it (and also the Foveon/Sigma sensor) deals with bad contrast. Red goes to pink, then to white, as you'd expect - but in a far more gradual manner, with very little yellow. This means that the parts of the image that normally would require HDR, or benefit from it, have proper color balance/truer colors.

The Sigma doesn't have better dynamic range than a typical camera, but it does the same trick our eyes do: when the cones in our eyes get saturated, they saturate gradually. This results in pictures that overexpose the way film does, with a nice soft shoulder at the extremes that can be recovered or dealt with in software. Is the Foveon/Sigma HDR? Well, sort of and sort of not. It handles colors more truthfully, and resolution is identical for each color, so it operates more like HDR for chroma and less so for luminance. This is easiest to see with a Bayer sensor, where purples and reds look fake and grainy next to strong blue or green sources. Unless you shoot very low light or black and white, better color fidelity is almost always the better choice.

And these are not prototype sensors or custom-application technologies, either; they've sold millions of these units over the years, the Fuji Super CCD since 2003 and the Foveon from around the same time. That they've done precious little with it in the last (almost) decade is really unfortunate.

This seems oversold (0)

Anonymous Coward | more than 3 years ago | (#34921708)

As far as I can tell, HDR just means capturing an image with higher color/luminance resolution, similar to going from 16 bit color to 32. It's not in any way a deep or fundamental change, just an incremental improvement. A nice improvement, to be sure, but the talk about exposure time and the difference between the human eye and a camera sensor is all beside the point.

Re:This seems oversold (0)

Anonymous Coward | more than 3 years ago | (#34921810)

It isn't just having more color resolution. Yes, maybe when it's all said and done, you can argue it's about more resolution. You can have 12 or 16 or 24 bits per channel of colour resolution in a standard camera and still not be able to get what HDR photography gives you. What you say is all beside the point IS the whole point of HDR photography due to various technical limitations of the cameras we know and love today.

Re:This seems oversold (1)

grumbel (592662) | more than 3 years ago | (#34922290)

Not quite. HDR is a very fundamental change because it doesn't just change the resolution, but the range. Currently we store and display images in a range from [0,1], 0 is black, 1 is white. We can store that range in 5bit or in 8bit per color and thus get a higher color resolution, but it will still just be the same range of [0,1]. Photographing a white paper and the sun both gives you a 1.

With HDR, on the other hand, you move beyond that range; you basically switch from integer to float (or to integer with a different range mapping). Instead of [0,1] you get something like [0, 6.55 × 10^4]. Thus you can accurately represent the difference between looking at a white paper and staring into the sun, information that would otherwise have been completely lost.

As our displays will still only have [0,1] range, you have to tone map the HDR images down to that range, but the additional information you have allows a lot of post processing tricks that would otherwise be impossible.
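As a rough sketch of the idea (the operator and sample values are mine, not from the article), tone mapping compresses unbounded HDR luminances back into the displayable [0,1] range; the Reinhard operator is one standard choice:

```python
def reinhard_tone_map(luminance):
    """Map an HDR luminance value in [0, inf) into the displayable [0, 1)."""
    return luminance / (1.0 + luminance)

white_paper = 1.0   # both would clip to 1.0 in a plain LDR image...
sun = 6.55e4        # ...but stay distinct as HDR floats

for lum in (white_paper, sun):
    print(f"HDR {lum:>10.1f} -> display {reinhard_tone_map(lum):.6f}")
```

The paper maps to 0.5 and the sun lands just below 1.0, so both remain distinguishable on screen instead of both saturating to white.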

Eyes really aren't that different. (1)

NicknamesAreStupid (1040118) | more than 3 years ago | (#34921744)

We just think they are. We change our focus and view, squint, shift our heads, and shade our eyes to avoid brightness to view dark areas. Video cameras can do most of that, too, plus they can zoom, something eyes lack. The problem is representing that view on a monitor, which does not have the dynamic range of the real world. Photographic prints that have HDR compensation may look surreal, and others look washed out in places. Video has the same issues. It takes a lot of post production to make it appear normal.

Re:Eyes really aren't that different. (0)

Anonymous Coward | more than 3 years ago | (#34922046)

There's some truth to that. One aspect of the "HDR problem" really is that a picture has to show the whole scene at one defined exposure level and present that to the viewer who looks at different parts of the scene in the picture, but the eye changes exposure levels while looking around the actual scene. On the other hand, the human eye really does have a significantly higher dynamic range (without adaptation) than a video camera and even most still cameras.

iPhone camera? (1)

multipartmixed (163409) | more than 3 years ago | (#34921748)

Is this the same HDR that the iPhone 4's still camera has?

Re:iPhone camera? (1)

WGFCrafty (1062506) | more than 3 years ago | (#34921892)

Heh. I think most photographers/videographers would be offended by that notion, unless you're being sarcastic. 8-)

Re:iPhone camera? (1)

joh (27088) | more than 3 years ago | (#34922696)

Heh. I think most photographers/videographers would be offended by that notion, unless you're being sarcastic. 8-)

Still, it works basically the same way. The iPhone camera in HDR mode takes three exposures and combines them.

Modern cameras have _much_ more than one f-stop (2)

aacosta (607712) | more than 3 years ago | (#34921792)

Video cameras, however, have just one f-stop to work with at any one time, and so must make compromises

Just to name one example, the Red One has a wonderful dynamic range of 11.3 stops. http://en.wikipedia.org/wiki/Red_Digital_Cinema_Camera_Company [wikipedia.org]

Re:Modern cameras have _much_ more than one f-stop (1)

WGFCrafty (1062506) | more than 3 years ago | (#34921928)

I think the point is 'at any one time.' The range of the aperture is a different matter; my Canon "L" glass can range from f/2.8 to the upper 20s. But you can't have a camera that uses a range of apertures at once, since the aperture is just a hole that limits the light. The only way to get more than one at a time would be a camera with multiple sensors and multiple apertures. Don't know if that would even be possible.

Re:Modern cameras have _much_ more than one f-stop (0)

Anonymous Coward | more than 3 years ago | (#34922130)

Dynamic range (i.e. the ratio of the brightest scene point measured as just below white to the darkest scene point measured as just above black) is measured in f-stops. An f-stop is a way of saying "half/twice the amount of light", so 11.3 stops means the dynamic range is such that (at the same time, in one picture) the light which causes a pixel of the sensor to almost saturate is 2^11.3 ≈ 2500 times as bright as light which the sensor can barely detect.
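In code, the stop-to-ratio arithmetic in that paragraph is just powers of two (a quick sketch, not from the article):

```python
import math

def stops_to_ratio(stops):
    """Each stop doubles the light, so n stops spans a 2**n contrast ratio."""
    return 2.0 ** stops

def ratio_to_stops(ratio):
    """Inverse: how many doublings fit in a given contrast ratio."""
    return math.log2(ratio)

print(round(stops_to_ratio(11.3)))  # ~2500, as in the comment above
print(ratio_to_stops(256))          # an ideal linear 8-bit sensor: 8.0 stops
```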

Re:Modern cameras have _much_ more than one f-stop (1)

MightyMait (787428) | more than 3 years ago | (#34921940)

What the author is trying to say is that conventional cameras are set to one particular f-stop at any given time. Of course, the f-stop can be changed, but you're still only using one setting at a time. With HDR photography/videography, the same image is captured multiple times at different f-stops.

Re:Modern cameras have _much_ more than one f-stop (4, Informative)

muridae (966931) | more than 3 years ago | (#34922554)

The summary is just plain wrong, and the article may be as well. First, there seems to be some massive confusion between f-stops and dynamic-range 'stops'. An f-stop is your aperture setting, part of what determines how much light gets into the camera. If I go out intending to take an HDR picture of something, the f-stop is the last control I will use in setting each exposure, because the f-stop has the side effect of changing the depth of field; that's covered in photography 101. If you change it across a set of pictures, some things will be in focus in one frame and out of focus in others, and it doesn't look that nice once post-processed.

On the other hand, a dynamic-range stop is just notation for a doubling of light. If someone said "that film has about 9 stops of resolution", you would know that the brightest area in a picture could have 2^9 times as much photonic flux as the darkest. Or, thinking in camera terms, you would know the film only records detail in the 4.5 stops above and the 4.5 stops below whatever you set the exposure for: an object 5 stops brighter than your exposure point would be a washed-out blur, and one 5 stops darker would be total shadow. A quick run through Google suggests that Kodachrome, the legendary film, could record only about 8 stops of dynamic range. The human eye can pick up something closer to 24 stops. GP's Red camera records 11.3 stops. Some people claim a digital camera gets as many stops as bits, but that's only true if the analog-to-digital conversion is logarithmic, and so is the display it's shown on. Mine runs about 7 stops, depending on other settings.

So, what's that got to do with this camera? I suspect what the article meant to say is that the camera captures 20 stops of data at 30 fps. That's better than the Red, better than almost any film in existence: it does in a single shot what other cameras do in several. All that will mean is less blur in HDR video, since subjects won't move irregularly between exposures. One would still have to tone-map the output down to a range that can be displayed for printing, projection, or DVD.
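To illustrate the last point with a toy model (the helper and sample values are mine, not the camera's actual pipeline): with linear HDR data spanning ~20 stops, "selecting the optimum exposure" in post amounts to scaling and clipping to an 8-bit display range, and no single scaling holds both extremes:

```python
def expose(hdr_value, exposure_stops):
    """Scale a linear HDR value by 2**stops and quantize to 8-bit display."""
    scaled = hdr_value * (2.0 ** exposure_stops)
    return max(0, min(255, round(scaled * 255)))

# Scene values spanning far more range than a display can show at once.
deep_shadow, midtone, highlight = 1e-4, 0.18, 500.0

for stops in (-4, 0, 8):  # darken, neutral, brighten in post
    print(stops, [expose(v, stops) for v in (deep_shadow, midtone, highlight)])
```

At every exposure either the shadow crushes to 0 or the highlight clips to 255; only tone mapping, which compresses the range rather than scaling it, can show both at once.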

Re:Modern cameras have _much_ more than one f-stop (0)

Anonymous Coward | more than 3 years ago | (#34922834)

mod parent up

Re:Modern cameras have _much_ more than one f-stop (1)

Plekto (1018050) | more than 3 years ago | (#34923030)

Which brings me to the real point of this insanity surrounding "HDR".

Q: Why do we actually need HDR?
A: Because today's sensors for the most part use technology that makes the images look like junk in poor lighting conditions. Specifically, the colors are terrible and there's very little ability to deal with over-exposures. The range between "perfect image" and "junk" is a very small margin with not much leeway or forgiveness between the extremes.

The response so far has largely been to increase dynamic range to silly levels, but that's really useless if the colors are inaccurate as a result (or, more likely, in spite of it). The fundamental issue remains unresolved. GIGO, in other words.

What we really need is sensor technology that deals with colors in a better manner. So that when we are faced with instances where the shot would normally look bad or washed out, the sensor can deal with it and correct for it like our brain does. Fuji and Sigma have tried but their attempts have been so-so. We need to keep exploring new technologies like this, IMO, rather than just doing more and more tricks with software.

Re:Modern cameras have _much_ more than one f-stop (1)

muridae (966931) | more than 3 years ago | (#34926420)

Which brings me to the real point of this insanity surrounding "HDR".

Q: Why do we actually need HDR?

You could just as well answer that it's because displays suck and cannot reproduce the range between dark and light that exists in the real world. Each weak spot must be conquered one at a time. Really, how much marketing has gone into convincing people that HD is the future, when we are stuck with displays that still use the sRGB color space? Yes, a wider space wastes bits on imaginary colors, but getting a display that can show saturated greens and yellows would be great.

The other point to the hyper-saturated HDR images that are popular right now is just artistic style. Neo-impressionism.

just one f-stop? try 10-11 (0)

Anonymous Coward | more than 3 years ago | (#34921812)

Not even the worst webcam has "just one f-stop" of dynamic range. Most DSLRs and video cameras based on them have 10-13 f-stops of dynamic range. That the article poster (or TFA, I haven't read it of course) got this basic bit of information wrong makes me less interested to read the rest.

Re:just one f-stop? try 10-11 (1)

aacosta (607712) | more than 3 years ago | (#34921870)

I wouldn't say that most cameras have a "real" dynamic range of 10 f-stops (the upper and lower stops are usually just a result of interpolation and thus cannot really be used in practice). Only top-notch ones have 10-11 stops; an average DSLR does not have more than 6 usable stops.

However, your point is clear, definitely much more than one.

f-stops are bunk (0)

Anonymous Coward | more than 3 years ago | (#34922060)

All the talk of f-stops is bunk anyway. F-stops are relative and very much subject to what the user finds 'usable'. Not to mention that f-stops are a function of aperture, whereas the 'stops' quoted for digital camera bodies (rather than lenses) are unrelated to the aperture.

Much better metrics would be the minimum acceptable light level (the point at which you detect light without it being a noisy mess, like the ISO12800 modes on P&S cameras), the maximum light level the sensor itself can handle (without electronic blooming, leaking, streaking, etc.), and how many bits the information between those two can be read out as. Good luck even finding that for cameras outside of scientific fields, though.

So we're stuck with old 'analog' photography terms. People will continue to be awed by cameras with 14 f-stops of which only 5 or so are usable, as you mention; by 24 MP cameras which only reach an optical resolution (limited by the combination of sensor, aperture, lens, etc.) worth 16 MP at best; and by ISO12800 settings whose noisy results are inferior to taking the picture at ISO3200 and boosting it in post (presuming RAW).

Re:f-stops are bunk (0)

Anonymous Coward | more than 3 years ago | (#34922272)

Dynamic range is a ratio, so it's by definition a relative metric. One stop of dynamic range means that with any exposure, shutter time, and lens combination, the sensor will saturate at twice the amount of light which produces the lowest non-black value. In other words, if you take an almost white picture, stop the camera down by one f-stop, and take another picture of the same scene, then getting an almost black picture means the camera has a dynamic range of 1 stop. To do this with a modern camera, you have to look at the RAW data, because they keep a little headroom above white and even more reserve below black compared to what they put into JPEGs.

Re:f-stops are bunk (0)

Anonymous Coward | more than 3 years ago | (#34923476)

if you take an almost white picture, stop the camera down by one f-stop and take another picture of the same scene, then getting an almost black picture means that the camera has a dynamic range of 1 f-stop.

If you stop an aperture down by a single f-stop, the amount of light hitting the film/sensor is halved, not shifted down (or up, depending on how you look at it). You'd get (ideally, anyway) a 50% grey, not (near) black.

Hence the number of 'f-stops' actually depends on the lowest amount of light that still meets your target fidelity (e.g. 8-bit = 256 levels) and how many times you can double that amount of light before the sensor saturates.

Of course, if you're willing to accept a fidelity where cranking up the deepest shadows leaves you with bi-level (0 and 1) noise, and you have a 16-bit sensor (65536 levels), then you can market the camera as having 15 f-stops of range.

This is not entirely unlike some monitor manufacturers' contrast claims a few years back: the relative numbers are completely useless without the absolute values associated with them.

Now where's my computational photography camera? Oh right, must milk the face recognition, smile-detection, 'HDR' shooting mode and in-camera panorama stitching first. Just when we were getting somewhere with the Exilims.

Re:f-stops are bunk (1)

dfghjk (711126) | more than 3 years ago | (#34923970)

A system with 1 stop of dynamic range can only produce white or black, there is no "50% gray". If you use more than one bit per pixel with such a system, everything between 0 and maxpixel is noise. If it weren't then, by definition, the system would have more than 1 stop of range.

Re:f-stops are bunk (0)

Anonymous Coward | more than 3 years ago | (#34925286)

No, it means that your 'white' can be at most twice as bright as your 'black'. How many steps you have between 'white' and 'black' depends on the precision.

A 1-bit system which maps 0 to 1 and 1 to 1024 would have a DR of 10 stops; the precision would just be too low to represent that range usefully.

Given x bits to represent some value, you can represent a huge range with low precision, a small range with high precision, or something in between.

Re:f-stops are bunk (2)

dfghjk (711126) | more than 3 years ago | (#34923952)

You defined a stop, not an f-stop. You are not alone in the confusion. There is a big difference between the dynamic range of the sensor, in stops, and the exposure settings necessary to utilize it. The author of the article doesn't understand that either.

Re:just one f-stop? try 10-11 (0)

Anonymous Coward | more than 3 years ago | (#34922318)

The dynamic range is measurable, and in fact dpreview.com has the actual hard numbers. Average DSLRs have 10 usable stops, better ones 12 or more.

That being said, I think the article poster is trying to say that cameras have the aperture open to only one position (what he calls f-stop, probably a combination of f-number and stops) at any given time and is not talking about dynamic range at all.

Re:just one f-stop? try 10-11 (1)

dfghjk (711126) | more than 3 years ago | (#34923922)

For some arbitrarily pessimistic definition of "6 usable stops" maybe so, but every DSLR EVER has had more than 6 usable stops by any reasonable definition. Ignorant photographers, most of whom are self-professed experts with a film background, like to say 6 usable stops based on a personal, arbitrary standard of having some number of usable shades of gray in the lowest "stop" without realizing that those shades define additional stops. Those fools, among them you, confuse stops and zones. 32 shades of gray in 6 "usable stops" is actually 11 stops of dynamic range. That is what is typically possible in today's DSLRs.

Re:just one f-stop? try 10-11 (1)

ThePeices (635180) | more than 3 years ago | (#34922310)

Not quite 10-13 f-stops.

A DSLR camera sensor captures 5 f-stops of dynamic range, both CMOS and CCD sensors, shooting in RAW format. Even the latest EOS 1D series still has 5 stops to work with, right down to the consumer-class DSLRs.

What these guys have done is create a system that can capture 20 stops worth of dynamic range, which is fantastic. But TFA is light on technical details on whether these guys have created a new chip design or if multiple sensors are used.

But I love this choice quote from TFA:
"It consists of an LED panel which projects through an LCD panel placed in front of it. The combination of the two screens is necessary to provide all of the lighting information."

Note the emphasis on the word 'through', as if using an LED panel as a backlight for an LCD display is something new or special.

Hilarious.

This is not about aperture, nor about ISO (2)

RichiH (749257) | more than 3 years ago | (#34922032)

It does not matter that a camera can only have one aperture and one ISO setting. Our eyes have only one iris, as well. What matters is that our retinas & brain have a dynamic range that trumps CMOS/CCD sensors. Oh, and the fact that our eye cheats by seeing more colour in the middle of the retina and more bright/dark & movement in the corner of our eyes.

That being said, I am looking forward to anything that extends the dynamic range in both cameras _and_ displays.

Re:This is not about aperture, nor about ISO (1)

Malc (1751) | more than 3 years ago | (#34925176)

I'd also suggest that as our eyes look around a scene, they adjust for whatever we're looking at directly. Cameras don't do this localised adaptation so effectively need to have a greater range than the eye.

Dynamic (1)

sexconker (1179573) | more than 3 years ago | (#34922372)

The word "dynamic" has a meaning.
This is not an "HDR" display, nor does such a display exist, nor would anyone want one.

This is an "HR" display.

"The new system, by contrast, captures 20 f-stops per frame of 1080p high-def video, at the NTSC standard 30 frames-per-second. In post-production, the optimum exposures can then be selected and/or combined for each shot, via a "tone-mapping" procedure."

They're using the typical method of taking many exposures of the same frame. Makes sense.
I would hope they're using an image/video format that can store exposure data per pixel. (But I know they're not - they're using a separate file per frame that maps each pixel or macroblock to an exposure value).

"The final step in the process is the HDR monitor. It consists of an LED panel which projects through an LCD panel placed in front of it. The combination of the two screens is necessary to provide all of the lighting information."

So basically, the monitor is super fucking bright, and then it's made lighter or darker based on another monitor in front of it acting as a dynamic screen.
I don't care whether the system works well or not (I'd have serious questions about compatibility with editing software, storage space, the viewing angle of a double-screen system, etc.). What I do care about is their use of the word "dynamic". If the range (of brightness) of the entire screen (every pixel) is the same, then it is not dynamic range. The range of the screen is simply [LEDS OFF + LCD CLOSED] to [LEDS ON + LCD OPEN]. Certainly way better than shitty, shitty 24-bit color and the corresponding brightness range (humans care far more about luminance than chrominance, so RGB alone was a bad choice for color representation). But the word "dynamic" has a meaning. If the range doesn't change across the screen (and why would you want that?), it's not dynamic. The brightness changes over time, but the range is fixed.

The dynamic in HDR photography means different sections of the image are exposed or level-adjusted differently from other sections of the same image. The range refers not to the display range, or the image format, but to the SOURCE MATERIAL.

There is no such thing as an HDR DISPLAY, and there never will be.

7 F stops (0)

Anonymous Coward | more than 3 years ago | (#34922444)

Pretty much the standard for a full frame DSLR sensor, Canon, Nikon, or Sony, within 10% or so. Saying the range of video is "1 Stop" is silly. In real life HD broadcast cameras, there are about 6-9 stops, with the RED system claiming 11+ stops of dynamic range.

Re:7 F stops (1)

dfghjk (711126) | more than 3 years ago | (#34923896)

All DSLR's sold today do better than 7 stops. You have to go back to the Nikon D1 to find something near 7 stops. And what does "10% or so" mean?

The system proposed claims 20 stops, which is preposterous. It was a BS article written by someone fundamentally ignorant of photography.

HDR thats good (1)

ohiovr (1859814) | more than 3 years ago | (#34922588)

Now there is an HDR video camera when are we going to have an HDR video monitor? Seems kind of useless to have an HDR camera with no way to display it.

HDR motion picture cameras are already shipping (2)

aibrahim (59031) | more than 3 years ago | (#34922982)

RED has shipped its first two production EPIC-M cameras, which have a feature called HDRx that allows up to 18 stops of DR in a single exposure for every motion picture frame. It doesn't require a beam splitter or any other gadgetry.

Peter Jackson has a number of them he's using for the Hobbit. I think the latest Spiderman is shooting with it too.

It does that at 5K, which is 5120x2880 resolution.

As to comments that HDR is better than 3D, or that you don't need lighting... they are unfounded. You still need lighting to create the precise mood you want. The advantage is that you can now create that mood more easily in more lighting conditions. This is especially important in conditions that the film maker can not control. The first RED demo was a shot from inside a barn out the barndoor into the Arizona desert. The camera held detail in the shadows inside the barn and in the sky and on white surfaces in direct sunlight.

The normal solution to that lighting situation is to pour about a hundred thousand watts of lighting into the inside of the barn, hope nothing catches on fire and that you are close enough to the sun

Re:HDR motion picture cameras are already shipping (1)

dfghjk (711126) | more than 3 years ago | (#34924018)

HDRx uses two exposures. The exposures aren't merged, either, they are stored as separate tracks in the data stream.

The First? (2)

CoolGuySteve (264277) | more than 3 years ago | (#34923460)

Didn't Autodesk Toxik 2008 already do HDR compositing with RED cameras?

Not only that, didn't they sell it to real live customers?

This is not the first, and frankly it's not even notable.

Re:The First? (1)

chichilalescu (1647065) | more than 3 years ago | (#34924622)

Read the article. It is the first, and it is notable: they have one camera that generates video with a higher number of bits per pixel, and they then use a custom display that can properly render this richer information. Before this, as far as I know, everybody kept reducing the information down to the 24-bit standard.

This is already available in a consumer product (1)

meeotch (524339) | more than 3 years ago | (#34923924)

o.k. - attention-grabbing subject line aside... I can't RTFA b/c it's slashdotted, so I don't know exactly what dynamic range we're talking about. But the much hyped Red Epic camera (sequel to the Red One) has full-motion HDR, and is shipping as of this month. Models with this feature range from around $10k to around $40k - so admittedly more prosumer than consumer.

It stores the extra data in a secondary video stream, so that you can tone-map in post. And apparently, it can be dialed up & down, so that you can trade off dynamic range vs. resolution (up to 5k) vs. framerate vs. compression. Pretty sweet.

I don't understand it. (0)

Anonymous Coward | more than 3 years ago | (#34924260)

Isn't it as complicated as the anatomy of the human eye?

I only know how to process JPEG files using tmpgencoder for creating foolish movies.

Re:I don't understand it. (0)

Anonymous Coward | more than 3 years ago | (#34924324)

Is HDR like dynamically picking, for each image or subimage, among the decoder's default 4096-color tables, or among tables created from the image's content?

Aka, I got the idea of having a million default 4096-color tables, as used by worldwide televisions, in a highly compressed repository integrated into the video decoder (the encoder doesn't need it as much, since searching for the optimal table wastes CPU cycles), :P

Re:I don't understand it. (0)

Anonymous Coward | more than 3 years ago | (#34924426)

Neither HDR images nor HDR video are a silver bullet.

Photographers know very well the problems of lighting focus and uncontrolled defocusing that affect the resulting images.

If this problem is bad for 2D, it will be worse for 3D.

http://ecologicyborg.blogspot.com/2010/10/los-futuros-computadores-ecologicos.html [blogspot.com]

I've seen this demonstrated back in 1998 (1)

XNormal (8617) | more than 3 years ago | (#34925154)

A CCD was operated at 100 Hz instead of 50Hz (PAL) and alternating frames had different exposure times. The two interleaved video streams were merged in real time into a high dynamic range image and then compressed into a standard dynamic range image where details could be clearly seen in both dark and bright areas.

This was connected to a videoconferencing system and worked very well when the room lights were turned off for a projector. You could see both the presenter's face and the projected image. A standard camera showed the projection as a white washed-out rectangle and the rest of the room around it was almost completely dark.
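A minimal sketch of the merge described above (the gain factor and clip threshold are assumptions of mine, not from the 1998 system): two frames of the same scene at different exposure times combine into one frame, with clipped long-exposure pixels falling back to the short exposure.

```python
def merge_exposures(short_px, long_px, long_gain=4.0, clip=250):
    """Merge one pixel from a short and a long exposure (each 0-255).

    Values are brought onto a common linear scale by dividing the long
    exposure by its extra gain; clipped long-exposure pixels are ignored.
    """
    if long_px < clip:              # long exposure not blown out:
        return long_px / long_gain  # best shadow detail, rescaled
    return float(short_px)          # else fall back to the short exposure

print(merge_exposures(40, 160))   # unclipped: uses the long exposure
print(merge_exposures(200, 255))  # long frame clipped: falls back
```

The merged result has more usable range than either frame alone, which is then compressed to a standard-dynamic-range image for display, as the comment describes.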

A lot of confusion about HDR. (2)

Rothron the Wise (171030) | more than 3 years ago | (#34925584)

HDR photos you find on the web are actually tone mapped photos. They were HDR when they were captured, or when different exposures were combined into a single image, but after that stage they were tone mapped in order to make all the details visible on a conventional display.

Tone mapping is something we may stop doing once we have proper HDR displays like the one in this article. Such a display will more closely resemble the real world, and tone mapping will be unnecessary because our eyes can handle high-dynamic-range images just fine.

The perfect HDR video system would be one where you could film inside of a dark cave and you would see everything on the screen after your eyes had adjusted to the dark, and when the camera moved outside into the sun the intense brightness of the screen would make you squint.

Cheesy tone mapped HDR photos make your eyes hurt for totally different reasons.

The eye has lousy dynamic range (1)

Anonymous Coward | more than 3 years ago | (#34925590)

"The human visual system can perceive a scene that contains both bright highlights and dark shadows, yet is able to process that information in such a way that it can simultaneously expose for both lighting extremes"

Completely untrue. Any decent DSLR should have better dynamic range than the human eye. When you look at a scene, you see the overall picture and it seems clear, but at each instant you're really seeing one small area in detail and the rest of the scene in much less detail. You don't notice, because the eye and brain adjust quickly to optimize light sensitivity for whatever you're paying attention to, instant by instant; wherever you look, it looks good. The parts without much detail go unnoticed because you're not attending to them. You can resolve a lot of detail in dark areas when you focus on them, and a lot of detail in lighter areas when you focus on them, but not at the same time. The eye can pull so much detail out of a scene because it's actually making a different, customized exposure for each little bit of the scene as it moves around. That's why a camera that captures the entire scene in one exposure has such a hard time competing: it's not that the imaging sensor isn't as good as the human retina, it's that it's doing a completely different thing. And that's the impetus behind HDR: taking a number of exposures optimized for different light levels and combining them so that each part of the scene is properly exposed, just as the eye does when it looks over a scene.
