[opendtv] Re: Video compression artifacts and MPEG noise reduction

  • From: Craig Birkmaier <craig@xxxxxxxxx>
  • To: opendtv@xxxxxxxxxxxxx
  • Date: Fri, 13 Jun 2008 14:46:29 -0400

Looks like this could be a useful product for broadcasters who are trying to re-use feeds that are compressed too heavily. It also looks like they are doing some of the things that are part of H.264.


The trouble is, however, that the feeds coming into the broadcaster are not the big problem. It is what they are transmitting that is a (small) problem, and what the cable and DBS companies are doing to trash all kinds of content through the use of overly aggressive compression.

It's hard to imagine consumers putting such a processor on their TVs.

Much easier to think that they will migrate to services that offer better picture quality.

Regards
Craig


At 11:28 AM -0400 6/12/08, Manfredi, Albert E wrote:
Anyone seen these Algolith products?

According to their web site, the processing can be applied on the transmit side, before MPEG encoding, and at the receive end, as a post-processor after MPEG decoding. If it works as advertised, it would be a good way to extend the life of MPEG-2 compression (obviously a Good Thing when dealing with an established standard).

http://www.algolith.com/products/broadcast/index.html

Here's a description of their HD noise reduction product:

"The Algogear(tm) VNR-1000-HD Dual Video Noise Reducer utilizes advanced noise reduction technology to remove both pre-processing and post-processing DCT-based compression artifacts such as mosquito noise and block artifacts, as well as various other types of random noise that result from analog to digital conversion, recording equipment or low-light captures. In addition, detail enhancement processing allows for enhanced detail without adding halos or ringing.

"The VNR-1000-HD dramatically increases picture quality and/or reduces bandwidth requirements for delivering content through cable, satellite, terrestrial networks, and IPTV delivery platforms. In addition, it allows for two channels to be processed simultaneously. This provides broadcasters with more channel processing density than any other solution available on the market today and significantly reduces their cost per channel."

Bert

---------------------------------------
http://www.videsignline.com/howto/180207350;jsessionid=QJG5WLN4TRGA4QSNDLOSKHSCJUNN2JVN

February 24, 2006

Video compression artifacts and MPEG noise reduction

By Phuc-Tue Le Dinh
and Jacques Patry, Algolith

In theory, DTV's picture quality is superior to traditional analog TV: no more "ghosting", "snow", "judder", "Never The Same Color", etc. Arguably, the analog signal's most glaring weakness is its blurriness and lack of fine image detail due to shortcomings in high-frequency response, or simply put, in bandwidth. The more detailed an image is, or the more resolution it has, the more bandwidth it needs.

Long ago, it was agreed that 6 megahertz (MHz) of the allotted spectrum would be allocated to each broadcast channel in the U.S. to carry these analog TV signals. This limitation on video bandwidth, and its corresponding standard (NTSC), in turn dictated the specifications of the traditional TV set, as well as its picture quality, for decades.

With the advent of DTV, broadcasters saw a great opportunity to make much better use of their bandwidth. Indeed, from their standpoint, one of the sterling advantages of DTV was that it allowed multiple channels in the same amount of bandwidth, and would later allow for high-definition programming (HDTV).

Too many bits

HDTV means a surge in technical requirements. A conventional NTSC signal has 525 lines scanned at 29.97 Hz and needs a minimum of about 4.2 MHz of video bandwidth to carry the analog picture within a 6 MHz channel. When digitized and compressed, this signal can be recorded on a DVD at a bit rate that varies from 2 to 10 Mbit/s (adaptive), with an average of about 4 Mbit/s. For comparison, a typical HDTV feed has roughly 5 times the resolution. All else being equal, the transmitted bit rate would need to be roughly 5 times higher to deliver similar quality.
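As a quick sanity check on that scaling argument, here is a back-of-the-envelope calculation in Python that uses only the figures quoted above (the 4 Mbit/s DVD average and the rough factor of 5); the comparison to the ATSC payload is approximate:

    sd_avg_bitrate_mbps = 4.0    # average DVD bit rate quoted above
    resolution_factor = 5        # "roughly 5 times the resolution"

    hd_estimate_mbps = sd_avg_bitrate_mbps * resolution_factor
    print(f"Naive HD bit-rate estimate: {hd_estimate_mbps:.0f} Mbit/s")
    # ~20 Mbit/s, in the neighbourhood of the ~19.4 Mbit/s payload of a 6 MHz ATSC channel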

Whether it's traditional over-the-air (OTA) broadcast, the cable company's set-top box or the satellite-TV provider, they all have a limited amount of bandwidth in which to send all these feeds, to which they add other bandwidth-intensive services such as interactive broadcasts, subscription channels, TV schedules, etc.

So what is the solution? Compression.

Digital Video Compression Artifacts

The most commonly used method today to compress digital video data is MPEG-2. From current satellite streams and digital cable feeds to off-the-air digital broadcasts, MPEG-2 has now been internationally adopted for a variety of applications.

MPEG-2 first exploits temporal redundancy through motion estimation and then spatially subdivides the image into 8x8 blocks, to which the DCT (Discrete Cosine Transform) is applied to exploit spatial redundancy. Compression is achieved by quantizing the resulting DCT coefficients, re-ordering them to maximize the probability of long runs of zeroes, and then run-length coding those runs. Finally, a Huffman encoding scheme is applied. The whole process allows for great savings in bit rate (compression ratios greater than 10:1).
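As a rough illustration of that pipeline (not an MPEG-2 implementation), the Python sketch below runs one 8x8 block through a 2-D DCT, a crude flat quantizer and a diagonal scan that approximates the standard zig-zag order. NumPy and SciPy are assumed, and the quantizer step of 16 and the test block are arbitrary choices:

    import numpy as np
    from scipy.fftpack import dct

    def dct2(block):
        # 2-D type-II DCT applied along both axes of an 8x8 block
        return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

    x = np.arange(8, dtype=float)
    block = np.add.outer(x, x) * 8.0 - 56.0   # smooth ramp: energy sits in the low frequencies
    quantized = np.round(dct2(block) / 16.0)  # coarse flat quantizer (MPEG-2 uses weighted matrices)

    # Diagonal scan approximating the zig-zag order: low frequencies first,
    # so the zeroes created by quantization cluster into long runs.
    order = sorted(((i, j) for i in range(8) for j in range(8)),
                   key=lambda p: (p[0] + p[1], p[0]))
    print([int(quantized[i, j]) for i, j in order])   # mostly zeroes after the first few terms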

However, these savings don't come free, and because the codec discards some of the original video information, there can be serious side effects; MPEG-2 is what we call a lossy codec. It discards image information believed to be of lesser visual importance. The more you want to compress, the further away you get from the look of the original image. Image quality and fidelity now depend on the chosen (or, often, imposed) level of compression. And since that is directly tied to the available bandwidth, we must ask ourselves: when is the video simply too compressed?

Visible Artifacts

Bandwidth restrictions in the digital domain, combined with an aggressive image compression scheme, will manifest themselves differently than in the analog world.

Analog degradation (or noise) will more often than not follow a Gaussian distribution. The advantage of this distribution is that it tends to preserve essential content and roughly mimics the eye's own response roll-off. We usually find a bandwidth-constrained analog image a bit fuzzy, but nothing clearly objectionable.

Digital noise follows a different distribution pattern and, more importantly, has a particular shape that human perception finds unnatural. There are mainly two artifacts that exhibit this characteristic when pushing the limits of MPEG-2 (or any DCT block-based codec): mosquito noise and blocking artifacts.

Mosquito noise, a.k.a. Gibbs effect

Mosquito noise is most apparent around artificial or CG (computer-generated) objects or scrolling credits (lettering) on a plain coloured background. It appears as haziness and/or shimmering around high-frequency content (sharp transitions between foreground entities and the background, or hard edges) and can sometimes be mistaken for ringing. Unfortunately, this peppered effect is also visible around more natural shapes like a human body. The VIRIS project (a Video Reference Impairment System) defines mosquito noise as follows: "Form of edge busyness distortion sometimes associated with movement, characterized by moving artifacts and/or blotchy noise patterns superimposed over the objects (resembling a mosquito flying around a person's head and shoulders)."

It occurs when the image is reconstructed by inverting the transform (IDCT), and the discarded data can only be approximated.

"Mosquitoes" can also be found in other areas of an image. For instance, the presence of a very distinct texture or film grain at compression will also introduce mosquito noise. The result will be somewhat similar to random noise; the mosquitoes will seem to blend with the texture or the film grain and can look like original features of the picture.

Blocking artifacts

Blocking artifacts, as the name suggests, manifest themselves as objectionable and unnatural blocks within an image. Sometimes referred to as macro-blocking, it is a picture distortion characterized by the underlying block encoding structure becoming visible.

When pushing the limits of the encoder, each block is rather roughly averaged, making it appear as one big pixel. From block to block the calculated average can vary, which creates these well-defined borders between blocks.

This effect becomes even more pronounced when there's some fast motion or quick camera movement. Probably the best example for this is during NFL telecasts, where the player carrying the football can quickly turn into some form of low-res, blocky, pixelated Mario Bros. look-alike from the old Nintendo days.
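One minimal way to see where those borders come from (illustrative Python, NumPy assumed) is to keep only each 8x8 block's average, the limiting case of very coarse quantization; a smooth gradient then turns into flat 8-pixel steps with hard jumps at the block boundaries:

    import numpy as np

    def dc_only(image, bs=8):
        # Keep only each block's average (its DC term), as an extremely coarse quantizer would
        out = np.empty_like(image, dtype=float)
        for y in range(0, image.shape[0], bs):
            for x in range(0, image.shape[1], bs):
                out[y:y + bs, x:x + bs] = image[y:y + bs, x:x + bs].mean()
        return out

    ramp = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))   # smooth horizontal gradient
    print(np.round(dc_only(ramp)[0, :24]))   # flat 8-pixel runs with abrupt jumps between blocks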

Pre-Smoothing

Although not part of the compression artifact family, pre-smoothing has worked its way onto this short list of conspicuous digital annoyances.

Broadcasters and content providers alike have become increasingly aware of their distribution system's shortcomings. Some of them have come up with a rather questionable solution to their bandwidth restrictions: pre-smoothing.

By removing high-frequency content from the picture before putting it through the transmission chain, the encoder has a much easier time doing its job, and the resulting images are less subject to blocking artifacts and mosquito noise. On the other hand, this sometimes excessive filtering also gets rid of all the subtle details and textures of the original image. The football player with his 1-week playoff beard is now cleanly shaved (even when he's not moving) and the AstroTurf turns into a big green mass of Play-Doh...

One might argue that it's not such a bad trade-off, but pre-smoothing is simply an irreversible process. Once the details are erased, there is no way to re-create them at the other end.
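For reference, a pre-smoothing stage amounts to little more than a low-pass filter ahead of the encoder. The sketch below (Python with SciPy assumed; the Gaussian sigma is an arbitrary choice) shows the idea; whatever detail the filter removes here is gone for good:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    frame = np.random.rand(1080, 1920) * 255.0         # stand-in luma frame
    pre_smoothed = gaussian_filter(frame, sigma=1.5)    # larger sigma: easier to encode, softer picture
    print(frame.std(), pre_smoothed.std())              # high-frequency energy drops sharply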

Mosquito noise and blocking artifacts, however, can be addressed.

MNR: Algolith's Solution

From an academic standpoint, compression artifacts and their correction have been extensively studied, but until now, there haven't been many tangible solutions for the end-user.

Algolith is one of the first to provide a proprietary real-time solution to mosquito noise and blocking artifacts: Algolith's MNR - MPEG Noise Reducer.

Algolith's MNR is a highly automated, autonomous, efficient and non-iterative technique for post-processing DCT compressed images and is the fruit of more than 10 years of algorithmic research and development.

MNR implements four distinct image processing techniques:

1 - Per-pixel temporal recursive noise reduction (sketched generically after this list),

2 - Mosquito noise reduction using sophisticated segmentation techniques,

3 - BAR - Block Artifact Reduction by detecting, blending and diminishing inherent block structures of DCT block-based compression,

4 - Multiple user adjustable image enhancement options using nonlinear filtering.
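To make the first of these techniques concrete, here is a generic, illustrative Python sketch of per-pixel temporal recursive noise reduction. It is not Algolith's proprietary algorithm; the blend weight and motion threshold are arbitrary assumptions:

    import numpy as np

    def temporal_recursive_nr(current, previous_filtered, alpha=0.25, motion_thresh=24.0):
        # Blend the current frame with the previous filtered frame, per pixel;
        # back off to the current pixel wherever the frame difference suggests motion.
        diff = np.abs(current - previous_filtered)
        weight = np.where(diff < motion_thresh, alpha, 1.0)
        return weight * current + (1.0 - weight) * previous_filtered

    # Usage: feed frames through in order, reusing the previous filtered output.
    prev = np.zeros((1080, 1920))
    for frame in (np.random.rand(1080, 1920) * 255.0 for _ in range(3)):   # stand-in frames
        prev = temporal_recursive_nr(frame, prev)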

The essence of MNR resides in its spatial image analysis module. Each pixel is classified into a specific region of interest; edge, texture, flat and artifact areas are all distinguished. MNR also looks into the temporal domain to discern the motion areas of the picture. Combining all this information determines which of many different filters is applied.

(Figure: MNR's spatial image analysis)
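For illustration only, a generic per-pixel classifier of the kind described above could be sketched as follows (Python with SciPy assumed; the thresholds are arbitrary and this is not Algolith's method):

    import numpy as np
    from scipy.ndimage import sobel, uniform_filter

    def classify(luma, edge_thresh=60.0, flat_thresh=10.0):
        # Strong gradients read as edges, low local activity as flat, the rest as texture.
        grad = np.hypot(sobel(luma, axis=0), sobel(luma, axis=1))
        activity = uniform_filter(grad, size=5)           # local average gradient magnitude
        labels = np.full(luma.shape, 'texture', dtype=object)
        labels[activity < flat_thresh] = 'flat'
        labels[grad > edge_thresh] = 'edge'
        return labels

    labels = classify(np.random.rand(64, 64) * 255.0)     # stand-in luma patch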

Adaptability is what sets MNR apart as an advanced image processing system. MNR is able to address very specific problem areas without hindering the rest of the image. It is probably as important to know when and where to filter as it is to know when and where not to filter, and it is with this motto in mind that MNR was designed. As a result, MNR only ever enhances the viewing experience and has proven to be a much-needed add-on for large-screen displays and projection screens. Its highly adaptive nature also allows for improved picture quality without disrupting the established broadcasting infrastructure. And since MNR has always been designed with real-time operation and hardware feasibility in mind, it can be seamlessly inserted into an end-user's existing home-theater equipment.

The Progression of Displays

It used to be that the NTSC standard dictated the specifications of the display. Analog TVs kept the same resolution for decades, with only marginal improvements in visual quality. Nowadays, the inability of the governing bodies to settle on and impose a single digital standard has, for better or worse, blown up the specifications of the generic display.

There is no longer a single standard resolution to cater to, and with the rapid rise of new display technologies (LCoS, DLP, etc.), some of the new monitors can now exceed the maximum resolution of the feeds. More importantly, newer displays boast higher contrast ratios and are reaching sizes that were unfathomable only a few years ago.

All of these elements put more stress on picture quality as they act as a magnifying glass on potential artifacts.

As display technology continues to evolve, remedies for faulty source material, such as Algolith's proprietary MNR, will become more attractive.

Maximize SD DVD While Waiting for HD Dust To Settle

The transition to the digital world is not restricted to the broadcasting industry. The old VHS tape is also dying a rather quick death (if it's not dead already) at the hands of the all-digital DVD-Video (Digital Versatile Disc). But even this technology is not safe from the need to compress.

Indeed, squeezing bonus features, extra footage and multiple soundtracks into an already constrained amount of space can leave the picture marred with visual artifacts. As more "value" is added, disc space comes at a premium and a higher compression ratio becomes the salvation. In turn, consumers complained about the deteriorating visual quality, which prompted the "Superbit" collection of DVDs, where the focus is on maximizing the space devoted to the movie feature itself.

This move by the DVD publishing industry not only confirms the possible drawbacks of compression, but also highlights the public's awareness of them. Hence, the standard DVD medium would also benefit from further video processing such as Algolith's MNR solution. In this case, MNR would allow current-generation media to stay relevant while we wait for a winner to emerge from the upcoming Blu-ray vs. HD DVD vs. HVD battle.

Future Outlook

The winner(s) of the new format battle aim to deliver HD content with the highest visual quality, something simply not possible with current DVD technology. Furthermore, these new formats will support not only current codecs but also next-generation ones:

- MPEG-2, enhanced for HD

- VC-1, the informal draft name of SMPTE standard 421M, based on Microsoft's Windows Media Video (WMV) technology

- MPEG-4 AVC, a.k.a. H.264, arguably the most promising

However, this variety of standards, combined with ongoing licensing issues, might result in the same disorder that has plagued DTV from the start. It remains to be seen how final image quality will be affected.

The Better Codecs Get, The More They're Pushed

DTV and HDTV were primed to be the Holy Grail in picture quality. Even so, the reality is that we are still far from image nirvana. The need to compress has brought forth several problems of its own and, with ever better display technologies emerging, these issues are becoming more apparent to the average viewer.

The industry, as a whole, is well aware of the harsh facts of limited video bandwidth. Advances in codec efficiency show potential as the next great step in picture improvement. Yet, as future services seem determined to increase bandwidth needs (IPTV, increased interactivity, specialized content, etc.), one has to wonder whether innovations in compression technology alone will ever catch up with consumers' expectations.

As this gap widens, the need for better video processing algorithms will certainly become another battlefield for a clearer tomorrow.

About the authors

Phuc-Tue Le Dinh earned a B.A.Sc. degree in Electrical Engineering from the École Polytechnique de Montréal in Canada and is completing his M.A.Sc. in Electrical Engineering. Mr. Le Dinh has worked extensively in video and imaging algorithmic development for various companies in Canada and now provides sales engineering support for major algorithmic contracts and OEM sales at Algolith. He can be reached at pledinh@xxxxxxxxxxxxx

Jacques Patry earned a B.A.Sc. degree in electrical engineering from the University of Sherbrooke in Canada and an M.A.Sc. in electrical engineering with a concentration in signal processing from the same university. Prior to Algolith, Mr. Patry worked as a design engineer and algorithmic development engineer for various companies in Canada and the United States. Mr. Patry is currently Product Manager at Algolith and is responsible for product management in the Home Theater, Post-Production and OEM markets. He is part of Algolith's founding team and is currently completing an MBA at HEC Montréal (Hautes Études Commerciales). He can be reached at jpatry@xxxxxxxxxxxx.


----------------------------------------------------------------------
You can UNSUBSCRIBE from the OpenDTV list in two ways:

- Using the UNSUBSCRIBE command in your user configuration settings at FreeLists.org

- By sending a message to: opendtv-request@xxxxxxxxxxxxx with the word unsubscribe in the subject line.


