[opendtv] Re: MPs back Ofcom stance on spectrum sale

  • From: Craig Birkmaier <craig@xxxxxxxxx>
  • To: opendtv@xxxxxxxxxxxxx
  • Date: Mon, 28 May 2007 13:32:44 -0400

At 5:19 PM -0400 5/25/07, Manfredi, Albert E wrote:
> That is implicit in the same measurement. MTF conveys the bandwidth of
> the lens, its capability to capture contrast as a function of spatial
> separation. Which is implicit in that luminance and chrominance
> bandwidth.

The MTF potential of the format and the MTF of the lens are different things. One usually limits the other.
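
To make the point concrete: the MTFs of the stages in an imaging chain multiply, so whichever stage rolls off first caps the whole system. Here is a minimal sketch in Python; the Gaussian lens rolloff and the ideal pixel-aperture model are illustrative assumptions, not measurements of any real camera.

import numpy as np

# Component MTFs cascade multiplicatively; the weakest link dominates.
f = np.linspace(0.0, 960.0, 97)           # cycles per active picture width
mtf_lens = np.exp(-(f / 700.0) ** 2)      # hypothetical lens rolloff
mtf_sensor = np.sinc(f / 1920.0)          # ideal 1920-sample pixel aperture
mtf_system = mtf_lens * mtf_sensor

for fq in (480, 960):                     # half Nyquist, and Nyquist
    i = int(np.argmin(np.abs(f - fq)))
    print(f"{fq:3d} c/pw: lens {mtf_lens[i]:.2f}, system {mtf_system[i]:.2f}")

At half the format's Nyquist this hypothetical lens still passes useful contrast; at Nyquist the product has fallen to roughly a tenth of full contrast, even though the raster alone could still represent that frequency.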

> As Tom said, I postulate that nature has infinite detail. Start with a
> scene with infinite detail. Capture that scene in the very best way you
> can in 1080p and 720p formats, then compress each with the same
> algorithm. That is when you will see the 2.25:1 ratio.

If you leave out the compression part, I agree. The potential exists for more high-frequency detail to be captured with the 1080P camera. Even in your hypothetical example, however, the ratio after compression would have nothing to do with this difference in detail - I've already explained some of the factors that influence compression efficiency.

But this theoretical case is not very useful. In the real world, MOST of the information the camera will see (and that the human visual system needs to see without distortion) will be at significantly lower frequencies. The whole point of MTF analysis is to determine which frequencies can be delivered with enough contrast to be useful to the human observer. While the 1080P camera can capture more detail, most of that detail will be at low contrast levels, and after compression most of it will be GONE.
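
A tiny numerical sketch of that last point, assuming a generic DCT codec (the single coarse quantizer step below stands in for a real quantization matrix): a faint texture near Nyquist transforms into small high-frequency coefficients, and coarse quantization rounds every one of them to zero.

import numpy as np
from scipy.fftpack import dct, idct

def dct2(b):
    return dct(dct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(b):
    return idct(idct(b, axis=0, norm='ortho'), axis=1, norm='ortho')

x = np.arange(8.0)
row = 2.0 * np.cos(np.pi * 6.0 * x / 8.0)  # faint stripes near Nyquist, +/-2 codes
block = np.tile(row, (8, 1))               # one 8x8 block of low-contrast detail

q = 40.0                                   # one coarse step, standing in for a quant matrix
coeffs = dct2(block)
survivors = np.round(coeffs / q) * q
print("nonzero coefficients after quantization:", np.count_nonzero(survivors))  # 0
print("reconstructed peak:", np.abs(idct2(survivors)).max())                    # 0.0

Cut the step to q = 8.0 and the largest coefficient survives; it is the combination of low contrast and coarse quantization that erases the detail.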

The best way to deliver this additional detail is to resample to a lower resolution for compression and emission. The area under the MTF curve improves relative to sampling natively at that lower resolution; the entropy is reduced; and the encoding overhead is reduced (it takes a lot of bits just to provide the header information for all of those extra blocks and motion vectors).
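
To put a number on the block overhead, here is some back-of-envelope arithmetic, assuming 16x16 macroblocks as in MPEG-2 and H.264 (encoders pad 1080 lines up to 1088, a multiple of 16):

import math

def macroblocks(width, height, mb=16):
    # encoders pad the picture up to a multiple of the macroblock size
    return math.ceil(width / mb) * math.ceil(height / mb)

for name, w, h in (("1080p", 1920, 1080), ("720p", 1280, 720)):
    print(f"{name}: {macroblocks(w, h)} macroblocks per frame")
# 1080p: 8160, 720p: 3600 - and every one of those blocks carries its
# own mode, header and motion-vector bits before a single coefficient
# is coded.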


> Just like digital audio, if you reduce the sampling rate when you
> capture an image, you will be throwing away some of the higher frequency
> detail, of which we already postulated there was an infinite amount. The
> reason you must throw away that detail is that you must go through an
> initial low-pass filtering stage in the A/D conversion, to prevent
> aliasing. And that initial low-pass filtering process is tuned to the
> sampling rate you plan to use in the next step.

It should be tuned to the sampling rate you are using, not to the next step. The next step is resampling, to improve the MTF and to lower the entropy.
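
For instance (a sketch, not a broadcast-chain implementation): scipy's polyphase resampler designs its own anti-alias low-pass for the 2/3 ratio, so "filter tuned to the rate, then resample" is a single call per axis:

import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(0)
frame_1080 = rng.standard_normal((1080, 1920))  # stand-in for captured luma

# 2/3 resampling in each direction; the anti-alias FIR is built in.
frame_720 = resample_poly(resample_poly(frame_1080, 2, 3, axis=0), 2, 3, axis=1)
print(frame_720.shape)  # (720, 1280)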


> So once again: start with the same scene, with more detail in it than
> either format can do justice to. THEN you will understand why your
> entropy arguments cancel out.

Sorry, but this is where you are driving into the ditch. There will be significantly more entropy in the 1080P image because you are NOT oversampling, and you are reducing the sensitivity of the camera. You can eliminate much of this entropy by low-pass filtering; however, that throws away MOST of the extra detail the format can theoretically provide. Today's 1080-line cameras typically start to roll off their response just above 20 MHz, and capture NOTHING above 24-25 MHz. The only way to improve on this is to oversample relative to 1080 lines.
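
To relate those MHz figures to the raster (assuming the 74.25 MHz luma sample clock and 1920 active samples per line of SMPTE 274M; the 20 and 25 MHz camera figures are the ones quoted above):

FS = 74.25      # MHz, SMPTE 274M luma sampling (assumed)
ACTIVE = 1920   # active luma samples per line

def cycles_per_width(f_mhz):
    # horizontal cycles across the active picture width at frequency f
    return f_mhz / FS * ACTIVE

for f in (20.0, 25.0, FS / 2):
    print(f"{f:6.3f} MHz -> {cycles_per_width(f):5.0f} cycles/picture width")
# 20 MHz ~ 517 cycles, 25 MHz ~ 646; the format Nyquist (37.125 MHz)
# is 960. A 24-25 MHz cutoff delivers roughly two-thirds of what the
# 1080-line raster could carry.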

> The ratio is key, if you understand what's going on. Because it tells
> you what it takes to be able to transmit 1080/60p in a channel barely
> capable of coping with 720/60p.

NO, it does not. The distribution of frequencies in the real world is what matters. All we are talking about is raising the highest frequency that can be sampled, and the level of contrast that can be achieved at the higher frequencies. In the real world, the actual amount of additional information will vary wildly with scene content; and when the high-frequency details increase, something has to give, because you need more bits to encode those details.

Perhaps we can turn another argument to this purpose. Mark noted that we need some noise to sample properly at any given bit depth. We could deliver 1080P at lower bit depths IF we could introduce noise in all the right places as we reduce the information content. This is what the compression algorithm is trying to do. Unfortunately it does not do it very well, and it does it VERY POORLY when we quantize too heavily.
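
Mark's point is easy to demonstrate numerically. In this sketch (the bit depths and amplitudes are illustrative assumptions), a tone smaller than one quantizer step is truncated to nothing without dither, but survives as a statistical average when TPDF dither is added first:

import numpy as np

rng = np.random.default_rng(1)
n = 1 << 16
t = np.arange(n)
step = 4.0                                        # 10-bit -> 8-bit step
tone = 0.4 * step * np.sin(2 * np.pi * t / 64.0)  # detail below one LSB

def quantize(x):
    return np.round(x / step) * step

plain = quantize(tone)                            # rounds to all zeros
tpdf = (rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)) * step
dithered = quantize(tone + tpdf)                  # tone rides on the noise

ref = np.sin(2 * np.pi * t / 64.0)
for name, q in (("plain", plain), ("dithered", dithered)):
    print(name, "recovered amplitude:", round(2 * np.mean(q * ref), 2))
# plain recovers ~0.0; dithered recovers ~1.6, the original amplitude.

The compression analogy is loose, of course - an encoder cannot place its quantization noise this conveniently - which is exactly the problem.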

> I'm growing tired of vague words used to dispute quantitative concepts.
> You can't determine (or dispute) what it takes to transmit 1080/60p with
> vague words, Craig. Just as you can't dispute what is involved in
> designing SFNs by using vague words.

Nor can you understand this subject with an overly simplistic quantitative analysis that barely touches the real factors driving the bit-rate requirements for good-quality compression.

> Also, there is no excuse for rudeness. None. You will note, the rudest
> posts are ALWAYS wrong on this list. This has been true from day 1.
> Rudeness does not mask cluelessness. Quite the opposite.


I'll give you some extra credit for not resorting to rudeness. But you've mastered cluelessness, as has been demonstrated again and again over the years, and in this thread, which I am now going to give up on.

Regards
Craig

