[opendtv] Re: Video compression artifacts and MPEG noise reduction

  • From: Tom Barry <trbarry@xxxxxxxxxxx>
  • To: opendtv@xxxxxxxxxxxxx
  • Date: Tue, 17 Jun 2008 21:24:23 -0400

Personally, I think most of the clever noise reduction algorithms just end up removing high-frequency detail. And once that is done, it would save more bits to simply resize the picture and compress it at a lower resolution.
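
As a crude illustration of the trade-off (zlib stands in for a real encoder, and a synthetic frame with a 3x3 box blur stands in for real video and a real pre-filter - my assumptions, not measurements):

import numpy as np, zlib

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(1080, 1920)).astype(np.float64)

def box_blur(img):
    # 3x3 box blur standing in for a broadcast pre-filter
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

blurred = box_blur(frame)          # full 1920x1080 raster, detail removed
half = blurred[::2, ::2]           # the same content at 960x540

size = lambda a: len(zlib.compress(a.astype(np.uint8).tobytes(), 6))
print("filtered 1920x1080:", size(blurred), "bytes")
print("downscaled 960x540:", size(half), "bytes")

The filtered full-resolution frame still costs several times the bytes of its half-resolution copy, even though the detail that justified the larger raster is already gone.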


Sadly, HDTV resolutions are standardized on just 1080i or 720p, so broadcasters can't do that (continuously lower the resolution). Instead they send the full resolution after removing the very spatial frequencies that would justify using that size.

- Tom

Craig Birkmaier wrote:
At 11:53 AM -0400 6/16/08, Manfredi, Albert E wrote:
I re-read their original 2006 white paper, which I posted initially, and
their product description and its user's manual. And you're right that
the VNR-1000-HD Dual Video Noise Reducer is a post-processor of MPEG
compressed video, but I continue not to understand why you think it
doesn't apply to constructing a broadcast multiplex.

What does this mean to you?

"The VNR-1000-HD dramatically increases picture quality and/or reduces
bandwidth requirements for delivering content through cable, satellite,
terrestrial networks, and IPTV delivery platforms."

It sounds like the description of a video pre-processor that can deal with various problems that can impact the quality of a source when it is compressed using any digital video compression technique. From page 4 of the manual that you cite below:

With effective removal of various types of digital noise and compression
artifacts, Algolith's VNR-1000 processing cards will greatly enhance
image quality and at the same time substantially reduce the bandwidth
requirements.

Main Features
* Mosquito Noise Reduction
* Random / Gaussian Noise Reduction
* Block Artifact Reduction
* Detail Enhancement
* Motion Adaptive / Content Adaptive
* Internal/External Reference Input
* Passes embedded audio and metadata
* Bandwidth Reduction (pre-compression)

What this tells me is that they have taken some of the features of their existing products and added features to clean up artifacts in feeds that may have been compromised somewhere in the delivery path to the station.

Please go back and re-read my original response to this thread:

Looks like this could be a useful product for broadcasters who are trying to re-use feeds that are compressed too heavily. It also looks like they are doing some of the things that are part of H.264.

The feeds coming into the broadcaster, however, are not the big problem. It is what they are transmitting that is a (small) problem, and what the cable and DBS companies are doing to trash all kinds of content through the use of overly aggressive compression techniques.

Consider the list of features above:

* Mosquito Noise Reduction - this is an artifact of DCT compression common to both MPEG-2 and JPEG. It results from excessive quantization within a DCT block, which changes the values of pixels within that block. It is an especially nasty aspect of DCT quantization that typically occurs in regions of very high contrast that fall within a block, such as the edge of a text overlay, or at any sharp edge boundary where there is a rapid change in pixel values. This is probably the most common form of compression artifact, typically occurring before the image breaks down into macroblocking artifacts (see the quantization sketch after this list).

* Random / Gaussian Noise Reduction - this is a problem common to all forms of video imagery. It typically occurs during image acquisition, as the result of sampling errors and noise in the processing circuitry. In analog systems it can also be added virtually anywhere downstream, by processing circuits and amplifiers.

* Block Artifact Reduction - this is the worst case of quantization noise. In essence, the entire contents of a DCT block, and typically the entire macroblock containing four DCT blocks, are quantized to one common value. The result is an 8x8 or 16x16 block of pixels with the same value. This happens when the DCT quantizer removes all of the high-frequency difference values within a block, which typically occurs when the encoder is unable to make a good prediction and the prediction errors for the block(s) are too large to fit in the available emission bandwidth. This artifact is common at scene changes that do not fall at GOP boundaries, where there are very large differences between frames (the sketch after this list shows this DC-only collapse as well).

* Detail Enhancement - essentially the use of sharpening filters to reconstruct scene edges. This is typically needed if the source has been low-pass filtered, or as the result of deblocking filters and mosquito noise reduction, which average pixels in the damaged area. Unfortunately, when blocking artifacts occur, so much detail is lost that it is nearly impossible to accurately re-create it (a simple unsharp-mask sketch follows this list).

* Motion Adaptive / Content Adaptive - essentially the use of motion compensated prediction to reconstruct detail from adjacent frames. This is a useful technique when there is sufficient detail present to form the basis for predictions. These techniques also suffer from a variety of real-world pathological imagery issues that make high quality prediction very difficult: reflections off of water and other reflective surfaces; "plastic deformation" - i.e. surfaces that change their reflective properties as they move; and the more generalized problem of object movement that reveals new scene information for which there is little useful information in the past or future to form the basis for predictions. It is worth noting that MPEG-2 does not attempt any form of real motion adaptive prediction - its algorithms are based on matching blocks against blocks from other frames within a GOP (a bare-bones block-matching sketch follows this list). Newer algorithms such as H.264 have improved prediction tools, but still fall far short of real motion adaptive prediction, because these techniques are so computationally intensive.

* Internal/External Reference Input - NA to this discussion
* Passes embedded audio and metadata - NA to this discussion

* Bandwidth Reduction (pre-compression) - these are typically techniques that remove information from the source to reduce image complexity, such as the low-pass filtering and noise reduction described above.
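
To make the quantization behavior above concrete, here is a small NumPy sketch of my own (purely illustrative, nothing from Algolith's product). It pushes an 8x8 block containing a sharp edge through a hand-rolled 2-D DCT, quantizes the coefficients coarsely to show the ringing behind mosquito noise, then keeps only the DC term to show how a block collapses to a single flat value:

import numpy as np

N = 8

def dct_matrix(n=N):
    # orthonormal DCT-II basis, the block transform used by JPEG and MPEG-2
    k = np.arange(n).reshape(-1, 1)
    x = np.arange(n).reshape(1, -1)
    c = np.cos(np.pi * (2 * x + 1) * k / (2.0 * n))
    c[0, :] *= np.sqrt(1.0 / n)
    c[1:, :] *= np.sqrt(2.0 / n)
    return c

C = dct_matrix()

# an 8x8 block with a sharp vertical edge, like the edge of a text overlay
block = np.full((N, N), 16.0)
block[:, 4:] = 235.0

coeffs = C @ block @ C.T                   # forward 2-D DCT

q = 64.0                                   # a deliberately coarse quantizer step
ringing = C.T @ (np.round(coeffs / q) * q) @ C

dc_only = np.zeros_like(coeffs)
dc_only[0, 0] = coeffs[0, 0]               # throw away every AC coefficient
flat = C.T @ dc_only @ C

print("original:", block[0].round(1))
print("mosquito:", ringing[0].round(1))    # ripples appear around the edge
print("blocked: ", flat[0].round(1))       # one flat value across the block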
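
For detail enhancement, the classic building block is an unsharp mask; a toy version, again just my own illustration, looks like this:

import numpy as np

def unsharp_mask(img, amount=0.7):
    # 3x3 box blur as a crude low-pass stand-in; real products use far
    # better kernels, but the principle is the same
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    blur = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # add back a scaled copy of the high-frequency residual
    return np.clip(img + amount * (img - blur), 0.0, 255.0)

The catch, as noted above, is that in a blurred or deblocked region the high-frequency residual is nearly zero, so there is very little left to amplify.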
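
And to give a sense of what MPEG-2 style block matching actually does, here is a brute-force SAD search over a small window (a sketch, not how any real encoder is written):

import numpy as np

def best_match(cur_block, ref_frame, top, left, search=8):
    # exhaustive search over a +/- `search` pixel window in the reference
    # frame, minimizing the sum of absolute differences (SAD);
    # cur_block and ref_frame are assumed to be float arrays
    bh, bw = cur_block.shape
    best_sad, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > ref_frame.shape[0] or x + bw > ref_frame.shape[1]:
                continue
            sad = np.abs(cur_block - ref_frame[y:y + bh, x:x + bw]).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

The encoder sends the winning motion vector plus the (hopefully small) residual; when nothing in the search window matches well, the residual blows up and the quantizer does the damage described above.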

So, as I stated in my very first response to this message, this product is designed for broadcasters to deal with incoming source streams that may already have suffered some damage in the contribution pipelines to the facility. It also looks like they have added some features from other products such as noise reduction and low pass filtering to assist in the pre-processing of streams that must be heavily compressed to fit in a station's multiplex.

Unfortunately, as I stated before in this thread, it is likely that the streams in the multiplex will suffer new degradation in the process of being re-encoded for emission.


Sounds to me like the individual HD or SD streams are sent through this
box before being combined in the multiplex transmission?

Look at their openGear frame, on page 6 of their manual.

http://www.algolith.com/fileadmin/user_upload/broadcast/3031-8001UG-200.pdf

Where does that chassis go, if not in the broadcast station?

DUH. I stated this in my very first response in this thread.


 But post-processing artifact reduction techniques cannot be
 used until the damage is already done.

Okay, I concede that this VNR-1000-HD is only for post-processing, after
the damage is done.

In their original white paper, they wrote:

"Arguably, one might think that it's not such a bad trade-off, but
pre-smoothing is simply an irreversible process. Once they erase the
details, one cannot go about re-creating those details at the other
end."

Thanks for pointing this out. This is the real crux of the matter. When we quantize away real image information and replace it with correlated noise - this is the essence of MPEG-2 compression - we forever lose the original detail of the source imagery.
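
A trivial numerical example (values are arbitrary) of why the loss is permanent: many different source values land on the same quantized level, and the decoder has no way to tell them apart afterwards.

import numpy as np

q = 16.0                                      # quantizer step size
originals = np.array([121.0, 124.0, 129.0, 133.0])
reconstructed = np.round(originals / q) * q
print(reconstructed)                          # all four come back as 128.0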


I took that to mean that this new processor would be used up front, as a
prefilter. Instead, what they apparently mean is that you reduce the
prefiltering, allow SOME MPEG overstress, and then process that out
before creating the multiplex. Nothing else makes sense. This is not
just a product to be installed in a receiver.

Your take is incorrect. Ultimately the source must be compressed for emission. It makes no sense to pre-process, compress, decompress, attempt to fix the artifacts, and then compress all over again.

This product is a tool for broadcasters to help mitigate the problems caused by compression in contribution feeds, and to pre-process the resulting streams to reduce encoder stress as they go through the process of compression and statistical multiplexing to form a station's multiplex.

If the content in that multiplex is over-compressed, new artifacts will be produced that can only be removed at the receiver. And as the quote above suggests, aggressive pre-filtering may reduce the generation of compression artifacts, but at the expense of image detail that cannot be recovered.

As a very real-world example, the multiplex of our local ABC affiliate carries the CW feed, which in turn is decoded and carried by our local cable system. The CW feed is so heavily pre-filtered that it has less resolution than a poor VHS recording.

Regards
Craig



--
Tom Barry                  trbarry@xxxxxxxxxxx  



----------------------------------------------------------------------
You can UNSUBSCRIBE from the OpenDTV list in two ways:

- Using the UNSUBSCRIBE command in your user configuration settings at FreeLists.org
- By sending a message to: opendtv-request@xxxxxxxxxxxxx with the word 
unsubscribe in the subject line.
