[opendtv] Re: 1080p @ 60 is Next?

  • From: Craig Birkmaier <craig@xxxxxxxxx>
  • To: opendtv@xxxxxxxxxxxxx
  • Date: Fri, 18 May 2007 11:15:14 -0400

At 1:04 PM -0400 5/17/07, Tom Barry wrote:
> The resolution/contrast (MTF) model has always bothered me a bit. It seems to me there are at least 3 components that are somewhat independent, say resolution/contrast/precision.

> Precision decreases when you go to a lower bit depth, and also during quantization in encoding. But except in extreme cases it doesn't directly lower contrast; it just makes the image less accurate and less pleasing.

I'm not very comfortable with the use of the term precision in this context. I agree that the difference between 8/10/12 bit sampling is a precision issue. But this is largely irrelevant, as the only use of anything more than 8 bits in a compression system is for archival purposes or special venues such as digital cinema. 8 bits can do a very good job, if the source is not heavily quantized.

The use of quantization in compression is anything but precision. It is the introduction of ERRORS into the source in areas where the eye MAY NOT notice. If I do not quantize the source I will maintain the integrity of the original samples - I suppose you could call this precision. But once I start to quantize, it is no longer a question of whether the actual sample is at the correct 8 or 10 bit value, but rather whether it is anything close to the correct value. I have seen green pixels turn black inside of a DCT block when quantized too hard. And of course, there is the extreme case where an entire 8 x 8 block becomes one value.
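
To make this concrete, here is a small Python sketch (using numpy and scipy, with an illustrative uniform quantizer - no real encoder works exactly this way) of how coarse quantization corrupts an 8x8 DCT block:

import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
# A mid-gray 8x8 block with random texture.
block = rng.integers(100, 156, size=(8, 8)).astype(float)
coeffs = dctn(block, norm="ortho")

for qstep in (2, 16, 64):
    # Uniform quantization: snap each coefficient to the nearest multiple of qstep.
    quantized = np.round(coeffs / qstep) * qstep
    recon = idctn(quantized, norm="ortho")
    err = np.abs(recon - block)
    print(f"qstep={qstep:3d}  max pixel error={err.max():6.1f}  "
          f"nonzero coeffs={np.count_nonzero(quantized)}/64")

At a large enough step size every AC coefficient rounds to zero and the whole block collapses to a single value - the extreme case described above.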

You have to understand entropy and entropy coding to appreciate what matters when pushing any content through a DTV emission system (or any compressed distribution system, for that matter). More can often be less.

If you stress the encoder too much you severely distort the delivered samples. It is nearly impossible to recover from this. On the other hand, if you back off on the resolution, you gain two benefits:

1. By resampling you improve the area under the MTF curve and reduce the entropy in the images. This helps in the contrast area, which "can" create the perception of improved resolution.

2. You reduce the stress on the encoder, allowing the integrity of the samples to be maintained.
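
A rough numerical sketch of point 1, in Python (the "camera" frame and noise level are invented; the entropy of horizontal pixel differences is a crude stand-in for what an intra coder must represent):

import numpy as np

def diff_entropy(img):
    # Shannon entropy (bits/sample) of horizontal first differences.
    d = np.diff(img, axis=1).ravel().astype(int)
    counts = np.bincount(d - d.min())
    p = counts[counts > 0] / d.size
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(1)
# Synthetic frame: a smooth luminance ramp plus sensor noise.
ramp = np.linspace(0, 255, 1920)[None, :].repeat(1080, axis=0)
img = np.clip(ramp + rng.normal(0, 8, ramp.shape), 0, 255).round()

# Resample to half horizontal resolution by averaging pixel pairs;
# the averaging acts as a low-pass filter.
half = img.reshape(1080, 960, 2).mean(axis=2).round()

print(f"full-res entropy: {diff_entropy(img):.2f} bits/sample")
print(f"half-res entropy: {diff_entropy(half):.2f} bits/sample")

The filtered, downsampled image measures lower entropy per sample - fewer bits of genuinely unpredictable information for the encoder to fight with.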

This absurd notion that the higher the resolution of the source, the better the viewing experience is where things go into the ditch.

I'll use your term - precision - in a different context.

In a DTV system it is VERY IMPORTANT to provide the best quality samples (i.e. good precision) to the encoder. It is equally important to provide adequate bandwidth to maintain the integrity of the samples through the encoding process. Only in this way can you provide delivered quality that is precise enough to undergo resampling in the display system for proper presentation.

High quality samples at a lower resolution will deliver a higher quality viewing experience than degraded samples at a higher resolution. This is true for ALL screen sizes.


> Lowering resolution drives the MTF curve exactly to zero past a certain point. This probably accounts for the discrepancy between, say, the theoretical resolution of film and the much lower detail observed after the telecine process, since film has a long-tailed MTF curve where much of the MTF value is off to the right in the discarded areas.

> During video encoding you can imagine that in any encoded 8x8 block you can reduce resolution simply by zeroing the highest frequency components, leaving more bits to represent the other lower frequency components. This will in turn likely reduce the contrast of adjacent displayed screen pixels, as sharp edges need to be represented by high frequencies. I've previously posted images here showing the results of this.

Not exactly. With MPEG-2 you do not have the ability to move bits where you need them inside of a single DCT block. You can only quantize based on the content of the block, first removing the highest order differences, then the smaller differences. It is important NOT to quantize the lower frequency coefficients, as they contain the most important information. The high frequency coefficients typically add only subtle differences, such as those in a gradient like a sky or a color field.
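
A hedged Python sketch of this ordering (the step-size matrix below is illustrative - the real MPEG-2 default intra matrix has the same rising shape but different values):

import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(2)
block = (128 + 40 * rng.standard_normal((8, 8))).clip(0, 255)

u, v = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
Q = 8.0 + 6.0 * (u + v)           # step size grows with spatial frequency

coeffs = dctn(block, norm="ortho")
for scale in (1, 4, 16):          # analogue of MPEG-2's quantizer scale
    q = np.round(coeffs / (Q * scale)) * (Q * scale)
    alive = np.count_nonzero(q)
    top = (u + v)[q != 0].max() if alive else None
    print(f"scale={scale:2d}: {alive}/64 coefficients survive, "
          f"highest surviving frequency (u+v) = {top}")

As the quantizer scale rises, the surviving coefficients retreat toward the top-left (low-frequency) corner: the fine detail goes first.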

The reality is that the FIRST THING that is discarded using MPEG-2 and AVC is the fine details. The trick is to limit the quantization such that the viewer does not notice this. And this is directly proportional to the screen size that the encoded content is displayed on. As you blow up the source you begin to see all of the information that is missing - i.e. you see the errors, rather than the precise values that were in the source.

> Or you can attempt to represent all frequencies but quantize each value so you have fewer bits of accuracy, as most codecs do now, not counting loop filtering and such. This lowers precision but I'm not sure it lowers contrast as much until you get to the more extreme cases.

I'm not sure that this is accurate. AVC/H.264 has better tools to deal with the content of an encoding block, including a variety of modes to represent different kinds of content, especially horizontal, vertical, and diagonal gradients. But in the end you are still quantizing the highest frequency detail first.
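
For what it's worth, here is a toy Python sketch of the directional intra prediction idea (a simplified 4x4 version, not the H.264 spec - the standard defines nine 4x4 luma modes):

import numpy as np

def predict_vertical(top):
    # Copy the reconstructed row above the block downward (H.264 mode 0).
    return np.tile(top, (4, 1))

def predict_horizontal(left):
    # Copy the reconstructed column to the left across the block (mode 1).
    return np.tile(left[:, None], (1, 4))

top = np.array([100.0, 110.0, 120.0, 130.0])   # neighbors above
left = np.array([100.0, 102.0, 104.0, 106.0])  # neighbors to the left
block = np.add.outer(left - 100.0, top)        # block with a gentle gradient

for name, pred in (("vertical", predict_vertical(top)),
                   ("horizontal", predict_horizontal(left))):
    residual = block - pred
    print(f"{name:10s}: residual energy = {np.square(residual).sum():8.1f}")

The encoder keeps the mode with the cheapest residual, so a gradient that matches a prediction direction costs almost nothing. But whatever residual remains is still transformed and quantized, fine detail first.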

In any case, you cannot deliver fine details if you quantize too severely.

One other area that is very important is noise. This is why oversampling is SOOOOOO IMPORTANT! Noise is entropy. It is more than a lack of precision; it is completely uncorrelated information. Resampling reduces the entropy in the image, making each sample more precise and thus easier to encode.

This is the reason that we rarely see compression systems use the full 1920 horizontal resolution of the 1080 line formats. There is simply too much noise at the highest frequencies. So we resample to 1440 to filter out the entropy.
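
Here is a small sketch of that 1920-to-1440 resampling in Python, using scipy's polyphase resampler (which applies an anti-alias low-pass as part of the operation); the test signal and noise level are invented for illustration:

import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(3)
line = 128 + 20 * np.sin(np.linspace(0, 40 * np.pi, 1920))  # clean, low-frequency detail
noisy = line + rng.normal(0, 6, 1920)                       # sensor noise on top

resampled = resample_poly(noisy, 3, 4)        # 1920 samples -> 1440
resampled_clean = resample_poly(line, 3, 4)

print(f"samples out: {len(resampled)}")
print(f"noise variance in:  {np.var(noisy - line):.1f}")
print(f"noise variance out: {np.var(resampled - resampled_clean):.1f}")

The low-frequency picture detail passes through, but the uncorrelated noise energy above the new Nyquist limit is filtered away - less entropy in, better samples out.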


> I think there is sort of a sweet spot of balance between the two processes for any given material and bit rate but don't know of rigorous studies suggesting how to choose it. Some encoding utilities like Gordian Knot make multiple test passes to calculate resolution recommendations but I think maybe only wavelet encoding can currently attempt to really balance the two factors on the fly.

Any encoding system can deal with this. Where we get into trouble is throwing away too much image information. This is why it is important that an emission system have adequate headroom to handle peak bit rate requirements. This is not the case today - almost everything is over-compressed.

There are a variety of ways to help the encoding process.

The most important is allowing the algorithms to run to completion WITHOUT any short cuts.

Next is stream analysis to adjust I frame placement and GOP size to match the requirements of the content.

Unfortunately, for live broadcast encoding both of these are impractical.

So the fix is generally to use noise reduction at the input, and a low pass filter in a feedback loop with the encoder. The result is what I call modulation of resolution, as the complexity of the source changes.
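
A toy feedback loop in Python makes the idea concrete (the numbers and the stand-in "encoder" are invented; real rate control is far more elaborate):

import numpy as np

def encode_cost(complexity, strength):
    # Stand-in for an encoder: more scene complexity costs more bits;
    # the pre-filter removes detail and therefore bits.
    return complexity * (1.0 - 0.6 * strength)

budget = 100.0      # bits-per-frame the channel can carry
strength = 0.0      # 0 = no low-pass filtering, 1 = maximum
rng = np.random.default_rng(4)

for frame in range(8):
    complexity = rng.uniform(80, 200)     # scene difficulty varies
    bits = encode_cost(complexity, strength)
    # Proportional feedback: overshoot the budget -> filter harder next frame.
    strength = float(np.clip(strength + 0.002 * (bits - budget), 0.0, 1.0))
    print(f"frame {frame}: complexity={complexity:6.1f} "
          f"bits={bits:6.1f} next filter strength={strength:.2f}")

Watch the filter strength track the scene difficulty - that tracking is exactly the modulation of resolution the viewer ends up seeing.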


> Anyway, I don't think quantization or a reduction of precision is quite the same as lowering either resolution or contrast. It's a third factor.


I agree that it is a separate factor, BUT ONLY if the output bit rate of the encoder is NOT constrained. That is, you can set the quantization level for very good quality using variable bit rate mode and let it rip. But you will get a stream where both the average and peak bit rates vary over a wide range.

Unfortunately this kind of stream rarely works in a distribution environment where the channel bandwidth is fixed and other sources/applications are contending for those bits. Statistical multiplexing is a big help. But when you have only one source, and insufficient bandwidth to maintain constant picture quality, you get what you get.
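
A back-of-the-envelope leaky-bucket sketch in Python shows the problem (the frame-size trace and buffer numbers are synthetic):

import numpy as np

rng = np.random.default_rng(5)
# Frame sizes from a fixed-quantizer VBR encode: mostly modest, with bursts.
frames = rng.gamma(shape=2.0, scale=40.0, size=300)   # kbits per frame

channel_per_frame = 100.0   # fixed channel: kbits drained per frame time
buffer_cap = 150.0          # encoder buffer size, kbits

fullness, overflows = 0.0, 0
for f in frames:
    fullness = max(0.0, fullness + f - channel_per_frame)
    if fullness > buffer_cap:
        overflows += 1      # in practice: dropped frames or brutal re-quantization

print(f"mean={frames.mean():.0f} kbit  peak={frames.max():.0f} kbit  "
      f"overflow events={overflows}")

The average rate fits the channel with room to spare; the peaks are what hurt. That is why a fixed channel forces the encoder to clamp quality (or resolution) rather than let the bit rate run free.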

The best way to avoid this situation is to resample to a lower resolution. This may or may not result in increased contrast depending on the source.

I guess you were right...

It's all about how much loss of precision you can tolerate.

Regards
Craig

