This is my first post, and I have not read everything in this thread, but I wanted to sign up and respond with two quick points.
Point the first, I agree in principle with Jeroen's argument for good tone-mapping even in the case of HDR displays. Any effort to exactly reproduce the real world in its full glory is bound to fall short; ambient light reflecting off the screen is one very important limitation. (Safety concerns aside -- mostly running into things -- one of my favorite display concepts is the "VRD" method of laser projection directly onto the retina: <http://www.hitl.washington.edu/projects/vrd/project.html> and <http://www.microvision.com/wearable.html>.)
However, there are undeniable benefits to good control and deep blacks, particularly for cinematic display. The litmus test for a sufficiently low black level is whether the black is a visual match to the unilluminated surround. Imagine that you have some bright, even glaring image information in the upper right of your display. Scattering in the lens and iris of your eye will cause other portions of the retina to experience a "veil" from this bright source, even when you avert your gaze to the lower left of the screen. Even so, humans evolved to deal well with this situation, as we have faced it almost every day for the past 30 million years. If the lower left of the screen was meant to be black but, due to limitations in the projection system and stray ambient light in the theater, it is not quite black -- rather, lighter than the dark curtains next to it -- our eyes will pick this up. The limit of our perception of black in the presence of bright sources with a natural (i.e., fractal) distribution of luminance corresponding to the real world is about 4 log10 orders of magnitude, give or take a factor of 10. This is considerably better than the simultaneous contrast capability of conventional display systems, despite what the spec sheets may claim. Furthermore, where our eyes can detect the black level, they can also detect details in the shadows at that level. For this reason, it is nearly worthless to have a black level much darker than the next (quantization) step up in the control scale while that step is itself still visible to the viewer. It improves the contrast ratio on the spec sheet, but with no real benefit to the end user.
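To put rough numbers on this, here is a back-of-the-envelope sketch. The figures are my own assumptions for illustration (8-bit code values, a 100-nit peak, a pure 2.4-power display gamma, and the Rec. 709 toe gain of 4.5), not from any spec sheet:

```python
# Illustration with assumed numbers: 8-bit video, 100-nit peak,
# pure 2.4-power display gamma.  Not from any particular spec sheet.

peak_nits = 100.0

# Luminance of the darkest nonzero code on a 2.4-gamma display.
first_step = (1.0 / 255.0) ** 2.4          # relative to peak white
print(f"code 1 decodes to {first_step * peak_nits:.5f} nits")

# A panel whose native black sits far below that first encodable step
# inflates the spec-sheet contrast ratio without adding any shadow
# detail the signal can actually address.
for native_black in (first_step * peak_nits, first_step * peak_nits / 100.0):
    print(f"native black {native_black:.2e} nits -> "
          f"spec contrast {peak_nits / native_black:,.0f}:1")

# In the Rec.709 toe (V = 4.5 * L, so roughly codes 1..20), adjacent codes
# k and k+1 differ by a constant linear amount, so their luminance RATIO
# is (k+1)/k: a 100% jump at code 1, still 5% at code 20 -- well above
# the roughly 1% contrast step a dark-adapted eye can resolve.
for k in (1, 2, 10, 20):
    print(f"code {k} -> {k + 1}: {100.0 / k:.0f}% luminance step")
```

The point of the last loop is that the visibility of the bottom quantization steps, not the native black of the panel, sets the useful floor.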
Point the second, there is at least one image format that does what Tom Barry suggests in encoding only the dynamic range present in the original image, and I happen to know about it because I worked on it:
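The general idea can be sketched in a few lines. This is my own toy illustration of Tom Barry's suggestion, not the actual format referred to above: per frame, quantize only the log-luminance range that is actually present, and carry the range endpoints as side data so a decoder can invert the mapping.

```python
import math

# Toy sketch (mine, not any real format): per frame, spend all the codes
# on the log-luminance range actually used, and send (lo, hi) alongside.

def encode_frame(lum, bits=8):
    """Map this frame's used log-luminance range onto 2**bits codes."""
    loglum = [math.log2(max(v, 1e-10)) for v in lum]  # clamp to avoid log(0)
    lo, hi = min(loglum), max(loglum)                 # range of THIS frame
    scale = (2**bits - 1) / max(hi - lo, 1e-10)
    codes = [round((v - lo) * scale) for v in loglum]
    return codes, (lo, hi)                            # endpoints travel per frame

def decode_frame(codes, endpoints, bits=8):
    lo, hi = endpoints
    step = (hi - lo) / (2**bits - 1)
    return [2.0 ** (lo + c * step) for c in codes]

# A "frame" spanning about four decades, in cd/m^2:
lum = [0.01, 0.5, 2.0, 120.0]
codes, ep = encode_frame(lum)
print(codes)                     # codes 0 and 255 bracket the used range
print(decode_frame(codes, ep))   # close to the input; error is at most
                                 # half a log step of the frame's own range
```

Because the step size adapts to each frame's range, a dark scene gets all of its codes in the blacks and a bright scene gets them in the highlights, rather than wasting a fixed allocation on values never used.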
That's all for now.

-Greg

------------ In response to August post -------------

From: "Donald Koeleman" <donald.koeleman@xxxxxxxxxx>
To: <opendtv@xxxxxxxxxxxxx>
Date: Thu, 10 Aug 2006 04:46:08 +0200

Well, one of the recommendations they have on their website reads something like the next best thing since sliced bread for gaming ;-). Any ISF calibrator wouldn't get paid if he made everything blue like those folks at NASA. Luminance and its details seem much enhanced on the example pic, but it is also all blue, so colour is seriously compromised.
----- Original Message -----
From: "Jeroen Stessen" <jeroen.stessen@xxxxxxxxxxx>
To: <opendtv@xxxxxxxxxxxxx>
Cc: "Charles Poynton" <poynton@xxxxxxxxxxx>
Sent: Monday, August 07, 2006 10:51 AM
Subject: [opendtv] Re: News: High Dynamic Range imaging

Hello !

Tom Barry wrote:
>> I've wondered for some time now if it would be better to
>> use some non-fixed color space where that S-curve, log
>> curve, or whatever was parametrically defined for each
>> frame, based upon the probability distribution of the actual
>> values needed. The actual values used for each frame
>> would be determined at the time of down converting from
>> some higher bit space, say in the camera or telecine
>> machine.

That's exactly what I once proposed to Prin, after his presentation on Digital Cinema. My exact words: at the end of the day a gamma function is just a quantisation table. There is no reason to choose a fixed function and then waste many quantisation codes on values that are never used. You need the low values for the dark scenes and the high values for the bright scenes, but not at the same time.

I hope that nobody is suggesting that we use HDR displays (like BrightSide's) for entertainment purposes ? See: http://www.brightsidetech.com/ Because my non-HDR eyes will protest ! Dust in my eyeballs, dust on the projection optics, dust in the theater, and any other causes of flare will all cause the black level to be raised to dark grey and all details to be lost. There is no point in trying to render deep blacks if they never reach your retina. Better to compress the dynamic range to between 1000:1 and 100:1 and preserve at least the details in the blacks. For example with an algorithm like NASA's Retinex: http://dragon.larc.nasa.gov/ It is okay to capture data in HDR, but it's not okay to present it in that form to the human eyes. It will be like driving with the sun in your face and a dirty windshield. Not a pleasure !
Bert Manfredi wrote:
> What seems to be missing from this thread is that the 8 or 12 or whatever
> bits used to quantize the light samples are not linear with intensity,
> right? I mean, in practice (although they could be made linear, I suppose.)
> As intensity increases, the coarseness of quantization increases by some
> power factor -- Gamma correction. So things aren't quite as bad as linear
> coding.

Exactly. Gamma functions are used for video, and log curves sometimes for professional applications. This is necessary so as not to waste too many codes on shades of white that cannot be distinguished anyway. But it is still wasteful... You need more bits for coding variations between scenes than for variations within a scene. Going to 12 or more bits just because in a dark cinema you can better distinguish details in the dark scenes is wasteful. If there were a bright object in the same scene then you could not look into the dark anymore, so you could give some coding values to white and take them away from the blacks. So this obviously calls for a dynamic coding, a variable range. I would like to take the concept of "perceptual coding" one step further, and include the dynamic adaptation of the eye...

> Doesn't ATSC use some gamma correction defined in ITU-R BT 709?

Officially yes. Unofficially they will use whatever correction is necessary to make the picture look good on the studio monitor...

> Charles Poynton sez that if you do gamma correction before quantizing,
> even 8 bits per sample are actually okay.

That is only very conditionally true ! Three major conditions:
- you need a typical viewing ambient, not too dark, and
- you need some analog noise on the signal before quantisation, and
- you can't do any cascaded operations on the 8-bit signals !
  (Operations of the type that introduce more quantisation noise.)

If you took a digitally rendered signal, quantised it to 8 bits on a Rec. 709 gamma function, and then viewed it in a dark cinema, I am certain that the quantisation of the blacks would look terrible. The toe gain of 4.5 is way, way too low for preserving the blacks. 8-bit D1 was good for taping noisy analog video signals; it is not enough for a completely digital video chain.

> That whole presentation is pretty interesting. He says that linear
> 8 bit quantization is not nearly enough.

That is very true. However, his "nemesis" Timo Autiokari also makes a good point of why the gamma domain should not be used for some (many ?) types of signal processing: http://www.aim-dtp.net/aim/evaluation/gamma_error/index.htm

Did you know that Charles is fully employed these days ?

Best,
-- Jeroen

----------------------------------------------------------------------
You can UNSUBSCRIBE from the OpenDTV list in two ways:

- Using the UNSUBSCRIBE command in your user configuration settings at FreeLists.org
- By sending a message to: opendtv-request@xxxxxxxxxxxxx with the word unsubscribe in the subject line.