[opendtv] Re: MPs back Ofcom stance on spectrum sale

  • From: Craig Birkmaier <craig@xxxxxxxxx>
  • To: opendtv@xxxxxxxxxxxxx
  • Date: Thu, 24 May 2007 12:01:57 -0400

At 11:51 AM -0400 5/23/07, Manfredi, Albert E wrote:
Let me try again. If you calculate the maximum possible amount of
*information* that can be transferred to a display in 720 at 60p and in
1080 at 60p, you will find that the maximum possible is 2.25X greater in
1080 at 60p than it is in 720 at 60p.

Sorry Bert, but you still do not get it.
You are talking about the number of samples that are transferred to the display over some unit of time. This has NOTHING to do with the information content of those samples.
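For what it's worth, the 2.25X figure is nothing more than the ratio of raw sample counts between the two formats (standard 1280x720 and 1920x1080 rasters assumed):

```python
# samples delivered to the display per second in each progressive format
samples_1080p60 = 1920 * 1080 * 60
samples_720p60 = 1280 * 720 * 60

ratio = samples_1080p60 / samples_720p60
print(ratio)  # 2.25 - a ratio of raw samples, not of information
```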

Here are two simple examples that may help you grasp the concept.

Let's say we want to create a solid color field on a display. Regardless of the number of samples that will be displayed, we need only three bytes of data to represent any color field with 8-bit samples (you can call this RGB or YUV, whatever). We simply say that all samples are the same.

With MPEG we would need more data to represent the color field, as we would need to send the same "information" for each 8X8 DCT block, but this would still be a very small amount of information.
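A minimal sketch of that point, using a naive 2-D DCT-II written from scratch (an illustration, not any particular codec's implementation): a flat 8x8 block transforms to a single nonzero DC coefficient, so almost nothing needs to be sent per block.

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II of an 8x8 block of samples."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            out[u][v] = cu * cv * s
    return out

flat = [[128] * 8 for _ in range(8)]  # one block of a solid mid-gray field
coeffs = dct2_8x8(flat)
nonzero = sum(1 for row in coeffs for c in row if abs(c) > 1e-6)
print(nonzero)  # 1 - only the DC term carries any information
```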

At the other extreme, we could try to encode a random noise pattern, where the DCT blocks will contain a wide spread of uncorrelated sample values. Since there is NO CORRELATION between these samples, it is virtually impossible to compress such a signal accurately.
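You can see this with any general-purpose compressor - zlib here, purely as an illustration: two buffers with the identical number of samples, one flat and one random, compress to wildly different sizes.

```python
import random
import zlib

random.seed(0)
flat = bytes([128]) * 4096                                 # solid field
noise = bytes(random.randrange(256) for _ in range(4096))  # no correlation

small = len(zlib.compress(flat, 9))
big = len(zlib.compress(noise, 9))
print(small)  # a few dozen bytes at most
print(big)    # roughly the original 4096 bytes, or slightly more
```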

A more realistic example would be a very fine texture - perhaps shooting a coarse tweed jacket. Each DCT block would contain a wide range of samples and a number of different high-frequency components. Any attempt to quantize such a block would result in a significant loss of the high-frequency detail. A lower resolution version of this image would contain less detail, thus less information.

To make matters worse, with a 1080@60P camera we would capture more information, but also more entropy - i.e. noise would cause distortions in the samples, which would actually add to the encoding overhead. If we resampled this image to 720P we would reduce both the entropy (sampling errors) and the information content (high-frequency details). Thus this image would compress with significantly improved efficiency.
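A rough sketch of the resampling point, with zlib standing in for a real codec and Gaussian noise standing in for sensor noise: averaging 2x2 neighborhoods attenuates the uncorrelated noise, so the downsampled image costs fewer bits per sample.

```python
import random
import zlib

random.seed(1)
W = 64
# a noisy, nearly flat "texture": mid-gray plus per-sample sensor noise
full = [min(255, max(0, 128 + int(random.gauss(0, 20)))) for _ in range(W * W)]

# naive 2:1 downsample: average each 2x2 neighborhood, which filters
# out part of the noise (entropy) along with any fine detail
half = [(full[r * W + c] + full[r * W + c + 1]
         + full[(r + 1) * W + c] + full[(r + 1) * W + c + 1]) // 4
        for r in range(0, W, 2) for c in range(0, W, 2)]

bits_full = 8 * len(zlib.compress(bytes(full), 9)) / len(full)
bits_half = 8 * len(zlib.compress(bytes(half), 9)) / len(half)
print(bits_full, bits_half)  # fewer bits per sample after downsampling
```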


This means that if you use the same compression algorithm in each
example, the relationship of b/s between the two will be 2.25X, in the
*very worst* case.

NO. This is only the relationship of the number of samples. It has NOTHING to do with the information content.


So you don't need to wrap yourself around an axle with considerations
about image content. The number 2.25X assumes that the 720p and 1080p
images EACH contain as much detail as they possibly can. Each is used to
its full extent.

In this extreme case, both would be impossible to compress into a 19.4 Mbps emission channel. Fortunately, we usually have a great deal of correlation in the samples that we can exploit with a compression algorithm. As I mentioned in a previous post in this thread, higher resolution images MAY have better correlation between samples (although noise is an enemy here). Because we are providing more samples for the same image detail than a lower resolution image, the amount of information in each block may be reduced if that information is at lower frequencies - then again, it could also be increased, as the filtering of a lower resolution image may eliminate fine details.
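To put numbers on "impossible": assuming 8-bit 4:2:0 sampling (12 bits per pixel) and the roughly 19.39 Mbps payload of an ATSC channel, the compression ratio required for raw 1080@60P is far beyond what uncorrelated samples would allow.

```python
# raw payload of each format, assuming 8-bit 4:2:0 (12 bits per pixel)
bits_1080p60 = 1920 * 1080 * 60 * 12   # ~1.49 Gbps
bits_720p60 = 1280 * 720 * 60 * 12     # ~0.66 Gbps

channel = 19_390_000                   # ATSC payload, bits per second

ratio_1080 = bits_1080p60 / channel
ratio_720 = bits_720p60 / channel
print(ratio_1080)  # ~77:1 compression needed
print(ratio_720)   # ~34:1 compression needed
```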

Once again the number of samples in each image has little to do with the information content.

Think of this as TOTAL INFORMATION CONTENT, which is then compressed
down to something much smaller by the compression algorithm. If you use
exactly the same compression algorithm in each case, it will compress
down by the same factor. Again, both images contain as much detail as
they possibly can, in their respective formats.

NOT TRUE. It will depend on the amount of information and the amount of entropy in each. If the lower resolution image is downsampled from the higher, there will be less information and less entropy.

I said that when/as/if a compression algorithm comes along that provides
a true 2:1, or slightly better improvement, compared with H.262, THEN
the transmission of 1080p will be feasible in the existing RF channels.

This would ONLY be true if the information content and entropy did not increase with the higher resolution format. Just the opposite is likely to be true.

 > 720P will compress more efficiently UNLESS the information
 content of the 1080@60P source is equal to or less than the
 information content of the 720P source. You can compress both
 to the same bit rate, but you will throw away most of the
 higher frequency detail in the 1080@60P source, which cancels
 out the ONLY reason why we would want to deliver 1080@60P in
 the first place.

SO WHAT? This assumes the same compression algorithm is used in both
cases. What you said there is completely obvious, Craig. Now change the
compression algorithm used in the 1080p case to something much better
than what you used in 720p, and what you said above will no longer hold.

SO WHAT? Why would you bother to send a higher resolution format through the channel if it delivers the same quality or less than a lower resolution format - one that compresses more efficiently and/or leaves more overhead to deal with the peak bit rate demands that cause more stress (artifacts) in the higher resolution encoding?

THIS WAS THE ENTIRE POINT OF THE EBU DEMO. Somehow you completely missed it.


 It seems to me that the PROPER place to place our attention in
 terms of improving the overall quality of DTV should be at the
 lower end of the spectrum - to improve the delivered quality of
 the SD content that represents the vast majority of what is
 delivered by broadcasters, cable and DBS today.

Legacy thinking. Just like we should only be concerned with improving on
the sound of 78 RPM records.

Sorry Bert, but this is not legacy thinking. The reality is that the vast majority of content being delivered today, and that will be delivered in the next decade, will NOT be HDTV.

To be honest, it seems that the distributors couldn't care less about delivered image quality at ANY resolution. They are squeezing it as hard as they can until people stop paying for the service.

Furthermore, the level of resolution should match the application requirement. We do not need HDTV resolution for MANY applications. What we need is good quality samples regardless of the resolution of the format. Good quality images can be scaled for any display. Artifacts just get magnified when we scale up.

Regards
Craig


----------------------------------------------------------------------
You can UNSUBSCRIBE from the OpenDTV list in two ways:

- Using the UNSUBSCRIBE command in your user configuration settings at FreeLists.org
- By sending a message to: opendtv-request@xxxxxxxxxxxxx with the word 
unsubscribe in the subject line.
