[opendtv] Re: 720P = 540i ?

  • From: Jeroen Stessen <jeroen.stessen@xxxxxxxxxxx>
  • To: opendtv@xxxxxxxxxxxxx
  • Date: Wed, 4 May 2005 09:12:08 +0200

Hello, 

John Golitsis: 
> Jeroen, do you mind if I repost this information to the Digital Home 
> Canada forums?  The same article is being discussed there, but there 
> are no technical experts on the forums.  Of course, that doesn't stop 
> some from pretending!

I don't mind, but first I should make some corrections. 

Tom Barry: 
> I don't suppose there exist any very very simple links on how that 
> process would work?  I admit I'm skeptical.

None that I know of, but there should be some mention of it in 
Gerard de Haan's book on multimedia video processing. If only 
I could find my copy... Anyway, the process of MC de-interlacing 
is quite simple: you have your odd lines from the current input 
field and you need to fill in the even lines from history. 
You still have the de-interlaced result from the previous field 
period, so you have a previous frame. Now if you can move (re-sample) 
that frame to the current motion phase, then you can simply pick the 
even lines from the previous frame and put them into the current 
frame. This is obviously a recursive process. It works perfectly if 
the motion is an even number of lines (2N, including 0) per field 
period, because then you'll be using only fresh lines from the 
previous field (and there is essentially no recursion, just weaving). 
It fails if the motion is an odd number of lines (2N+1) per field 
period, because then you'll be trying to repeat old information 
forever. For all other motion speeds you'll use more or less "stale" 
information. 
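
To make this concrete, here is a minimal Python/numpy sketch of the 
recursion (my own toy formulation, not the algorithm from the book: 
the motion is simplified to a whole number of lines per field period 
and the picture borders just wrap around): 

import numpy as np

def mc_deinterlace(field, prev_frame, dy, parity):
    # field      : (H/2, W) array, the lines transmitted this field period
    # prev_frame : (H, W) array, the de-interlaced result of the previous period
    # dy         : estimated vertical motion in frame lines per field period
    #              (whole lines only here; real vectors are accurate to a
    #              fraction of a pixel)
    # parity     : 0 if this field carries the even frame lines, 1 for the odd ones
    H, W = prev_frame.shape
    frame = np.empty((H, W), dtype=prev_frame.dtype)

    # Weave the fresh lines of the current field into their own positions.
    frame[parity::2, :] = field

    # Re-sample (here: just shift) the previous frame to the current motion
    # phase and copy the lines of the other parity from it.
    compensated = np.roll(prev_frame, dy, axis=0)
    other = 1 - parity
    frame[other::2, :] = compensated[other::2, :]
    return frame

With dy even the copied lines come straight from the previous field 
(plain weaving); with dy odd they were themselves filled in from 
history, which is the endless repetition of old information mentioned 
above. 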

This process depends heavily on the accuracy of the motion vectors, 
down to a fraction of a pixel. But as the estimation of motion 
itself is disturbed by interlacing artefacts, this can be tricky. 
We are seeing other methods coming up that do not depend on any 
motion vectors. Faroudja's DCDi was probably the first one. 
I think it means "Directionally Correlated De-interlacing", or 
something like that. It is based on the principle that lines are 
generally not perfectly horizontal, but slightly diagonal. And 
interlacing artefacts are most annoying on (slightly) diagonal 
lines. Instead of getting the missing information from previous 
fields, you can also get it from neighboring pixels in the same 
field. If you can estimate the angle of diagonal lines, then you 
can fill in the blanks too. Essentially you are getting vertical 
information from the horizontal neighbors at different vertical 
positions, instead of the temporal neighbors. 
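
The same-field idea is easy to illustrate with a classic edge-directed 
line average. This is only the generic principle, not Faroudja's 
actual DCDi algorithm; the function name and the three candidate 
directions are my own simplification: 

import numpy as np

def ela_interpolate_line(above, below):
    # Estimate one missing line from its two vertical neighbours in the
    # SAME field. For every pixel we try three directions (\, |, /),
    # keep the one where the two neighbours match best, and average
    # along that direction.
    W = above.shape[0]
    out = np.empty(W, dtype=np.float64)
    for x in range(1, W - 1):
        candidates = []
        for d in (-1, 0, 1):
            diff = abs(float(above[x - d]) - float(below[x + d]))
            avg = 0.5 * (float(above[x - d]) + float(below[x + d]))
            candidates.append((diff, avg))
        out[x] = min(candidates)[1]      # best-matching direction wins
    # plain vertical average at the picture edges
    out[0] = 0.5 * (float(above[0]) + float(below[0]))
    out[-1] = 0.5 * (float(above[-1]) + float(below[-1]))
    return out

Each missing pixel is taken from the pair of neighbours that agree 
best, so a slightly diagonal line is interpolated along its own 
direction instead of across it. 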

Such an intra-field method seems to be easier, cheaper, and more 
robust than some of the multi-field methods. This is the 
direction (pun intended) that de-interlacing is taking now. 

Tom: 
> Recently I've been becoming gradually convinced that in order to 
> completely remove aliasing artifacts from, say, 1080i you would 
> have to vertically filter to only 540 vertical pixels of 
> resolution. 

You mean filtering at the transmitter side, right? Because at the 
receiver side you can only filter (scale, etc.) AFTER having done 
the de-interlacing; filtering down to 540 lines there wouldn't 
make sense. 

> But if people were really doing that then I'm not sure there 
> would be ANY additional information available from looking at 
> adjacent fields, no matter how good or powerful your motion 
> compensation logic was.  And that would even be true for 
> still scenes.

Correct, though you could switch off the filtering for still 
scenes. Actually, according to Prin, proper pre-filtering of 
interlaced signals requires a vertical-temporal filter, which 
will therefore behave differently for moving images than for 
still images. 
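
As a rough illustration of why such a filter is motion-dependent 
(this is only a generic example of the vertical-temporal class, not 
the actual pre-filter from that work): 

import numpy as np

def vt_prefilter(frames):
    # frames: (T, H, W) progressive source. Each output sample mixes the
    # line itself, its vertical neighbours in the same frame, and the same
    # line in the previous and next frames (wrap-around borders, toy only).
    frames = np.asarray(frames, dtype=float)
    v = 0.25  * (np.roll(frames, 1, axis=1) + np.roll(frames, -1, axis=1))
    t = 0.125 * (np.roll(frames, 1, axis=0) + np.roll(frames, -1, axis=0))
    return 0.25 * frames + v + t         # weights sum to 1

For a still scene the temporal taps see the same content as the 
centre tap, so this collapses to a purely vertical [.25 .5 .25] 
low-pass; with motion they do not, and the effective vertical 
response changes. Subsampling to the interlace lattice would follow 
such a filter. 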

But the Generalized Sampling Theorem, like any Super Resolution 
scheme, depends on there being aliasing (in this case vertical-
temporal aliasing) in the individual fields. If there were no 
aliasing then all fields would indeed carry the same information, 
and there could be no gain from combining information from 
different fields (apart from noise reduction). De-interlacing 
must attempt to resolve the aliases and come up with the original 
image. On an interlaced display the tracking eye will do this. 
On a progressive display (or if we need to apply vertical scaling 
or frame rate conversion) we must do it with algorithms. 
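
A small 1-D numpy experiment makes the point. This is my own toy 
model, not a broadcast chain: a "vertical" signal band-limited to the 
frame Nyquist, sampled by two half-rate fields whose grids are offset 
by a known sub-line motion: 

import numpy as np

N = 64                                  # frame lines in this toy raster
K = N // 2 - 1                          # highest harmonic: just below frame Nyquist
rng = np.random.default_rng(0)
coef = rng.standard_normal(2 * K + 1)   # random band-limited test signal

def basis(pos):
    # DC plus cos/sin harmonics up to K, at (possibly fractional) line positions
    cols = [np.ones_like(pos)]
    for k in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * k * pos / N))
        cols.append(np.sin(2 * np.pi * k * pos / N))
    return np.column_stack(cols)

y = np.arange(N, dtype=float)
signal = basis(y) @ coef                # the full-resolution "frame"

shift = 0.31                            # known vertical motion between fields, in lines
pos0 = y[0::2]                          # field 0: the even lines
pos1 = y[1::2] + shift                  # field 1: the odd lines of the moved frame
samples = np.concatenate([basis(pos0) @ coef, basis(pos1) @ coef])

# One field alone gives 32 equations for 63 unknowns: hopelessly aliased.
# The two fields together give 64 equations, and least squares recovers
# the frame essentially exactly.
A = np.vstack([basis(pos0), basis(pos1)])
coef_hat, *_ = np.linalg.lstsq(A, samples, rcond=None)
print(np.max(np.abs(basis(y) @ coef_hat - signal)))   # prints a tiny number

Pre-filter the source to below the field Nyquist and the second set 
of samples adds nothing new, which is exactly Tom's point above. 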

Indeed a 1080i raster should be more than just two 540p rasters 
with a different scanning phase. The intra-field vertical aliasing 
is essential in order to get more information through. 
Under ideal conditions 1080i can carry the same information as a 
1080p raster, but this can still be disturbed by vertical-temporal 
aliasing. With a line-on line-off pattern (which is illegal anyway 
because it is too close to the Nyquist limit) you'll never know 
whether it was intended as a vertically detailed pattern, or as a 
30 Hz blinking frame. That is the essential meaning of "aliasing", 
that you cannot know for certain which one it must be. Luckily 
these cases are rare enough that in practice interlacing works. 
(Why send the same information twice - 540p - if you can send 
 additional information - 1080i - instead, I like to say.) 
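
The blinking-frame ambiguity itself is easy to show with a few lines 
of numpy (a tiny toy raster, my own example): 

import numpy as np

H, W = 8, 4                               # tiny toy raster

# Interpretation 1: a STILL frame with a line-on / line-off pattern.
still = np.zeros((H, W), dtype=int)
still[0::2, :] = 1

# Interpretation 2: a flat frame BLINKING at the frame rate, bright
# while the top field is scanned and dark while the bottom field is.
bright = np.ones((H, W), dtype=int)
dark = np.zeros((H, W), dtype=int)

# Interlaced transmission: top field = even lines, bottom field = odd lines.
fields_still = (still[0::2, :], still[1::2, :])
fields_blink = (bright[0::2, :], dark[1::2, :])

print(all(np.array_equal(a, b)
          for a, b in zip(fields_still, fields_blink)))   # True

Both interpretations put exactly the same two fields on the wire, so 
the receiver cannot tell them apart. 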

Greetings, 
-- Jeroen 

+-------------------------------+------------------------------------------+
| From:     Jeroen H. Stessen   | E-mail:  Jeroen.Stessen@xxxxxxxxxxx |
| Building: SFJ-5.22 Eindhoven  | Deptmt.: Philips Applied Technologies |
| Phone:    ++31.40.2732739     | Visiting & mail address: Glaslaan 2 |
| Mobile:   ++31.6.44680021     | NL 5616 LW Eindhoven, the Netherlands |
| Pager:    ++31.6.65133818     | Website: http://www.apptech.philips.com/ |
+-------------------------------+------------------------------------------+


 
 