[opendtv] Re: FCC Gives Official Nod to DTS

  • From: "Manfredi, Albert E" <albert.e.manfredi@xxxxxxxxxx>
  • To: <opendtv@xxxxxxxxxxxxx>
  • Date: Fri, 5 Dec 2008 20:12:25 -0500

Tom Barry wrote:

> You speak of the limits of echo tolerance in receivers. Can
> anyone explain what the physics or economic limitations are
> that makes this such a problem? Is there a simple reason why
> we can't make cheap receivers with huge echo tolerance?

I'll try.

First of all, I think you correctly imply here that the problem with
SFNs is that they create what looks like echoes to receivers. So SFNs
only work to the extent that these deliberately-created echoes don't
overwhelm the receivers. It's not as simple as the receiver just
accepting the signal from one tower, and ignoring the rest. (Which is
how cell phones work, and in principle so could broadcast receivers, but
that's another discussion.)

If you transmit a series of symbols, RF blobs of energy, with no spacing
between them, this creates as efficient a channel as you can get. The
max possible number of symbols per unit time.

What happens when an echo occurs? The energy from one symbol is extended
in time. Usually, this doesn't happen by the same amount across the
spectrum occupied by the symbol, but let's keep this simple. The symbol
essentially spreads itself beyond its allotted time interval.

But we said there was no spacing between symbols. So energy from one
symbol distorts the next one coming along, and perhaps many others too.
Demodulating a distorted symbol creates bit errors.

So to prevent this interference between symbols, let's introduce a gap
after each symbol, so that whatever likely echo duration there is will
never spill the energy from one symbol onto the next ones. And let's say
we have measured echoes to last many 10s of usec. How can you introduce
a gap between symbols that lasts multiple 10s of usec, without making
the RF link incredibly inefficient?

You need to make each symbol last a really long time. Take
single-carrier 64-QAM. Each symbol carries 6 bits-equivalent of
information. To achieve a raw 30 Mb/s (i.e. before FEC), you need to
transmit 5 million symbols/sec. Which means each symbol must last 0.2
usec and no more. Way less time than the expected duration of the echo.
We said echo was in the multiple 10s of usec. So obviously, any gap time
between symbols that has to last 10s of usec will make that RF link
ridiculously inefficient. Just one symbol in a blue moon. So how can you
introduce a nice gap time, to achieve echo tolerance, but without
creating such inefficiency?
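To put numbers on it, here's the arithmetic from the paragraph above as a
small Python sketch (the 40 usec echo figure is my own stand-in for
"multiple 10s of usec"; the rest comes straight from the 64-QAM example):

```python
# Single-carrier arithmetic from the post: 64-QAM at a raw 30 Mb/s.
BITS_PER_SYMBOL = 6          # 64-QAM: 2^6 = 64 constellation points
RAW_RATE = 30e6              # 30 Mb/s before FEC
ECHO_DURATION = 40e-6        # assumed echo length, "multiple 10s of usec"

symbol_rate = RAW_RATE / BITS_PER_SYMBOL      # 5 million symbols/s
symbol_time = 1 / symbol_rate                 # 0.2 usec per symbol

# Efficiency if we inserted an echo-length gap after every symbol:
efficiency = symbol_time / (symbol_time + ECHO_DURATION)
print(f"symbol time: {symbol_time * 1e6:.2f} usec")
print(f"efficiency with a 40 usec gap: {efficiency:.1%}")  # about 0.5%
```

Which is the "one symbol in a blue moon" problem: the link spends roughly
99.5% of its time sitting in the gap.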

Instead of transmitting that 30 Mb/s in a single stream of symbols, what
if you transmit many thousands of parallel streams? Now the symbols in
each stream can be made very slow. They can last even 100s of
usec. NOW a gap time in the 10s of usec won't seem all that wasteful. As
a percentage of symbol duration, the gap time can be fairly low.

Of course, these slow symbols won't carry many b/s. But adding together
the data rate offered by each of those very slow symbol streams will
come close to the data rate of the single symbol stream with no gaps.

That's COFDM. The echo tolerance is only as huge as the gap time that
can reasonably be introduced.
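Same arithmetic, but with the symbols spread across thousands of parallel
carriers (N = 6000 is my illustrative pick, in the neighborhood of DVB-T's
8K mode; the 40 usec gap is the same assumed echo length as before):

```python
# The same total symbol rate, split across many slow parallel carriers.
RAW_RATE = 30e6
BITS_PER_SYMBOL = 6
N_CARRIERS = 6000            # illustrative, roughly DVB-T 8K territory
GAP = 40e-6                  # same echo-length gap as the single-carrier case

total_symbol_rate = RAW_RATE / BITS_PER_SYMBOL      # 5e6 symbols/s overall
per_carrier_rate = total_symbol_rate / N_CARRIERS   # ~833 symbols/s each
symbol_time = 1 / per_carrier_rate                  # ~1.2 ms per symbol

efficiency = symbol_time / (symbol_time + GAP)
print(f"per-carrier symbol time: {symbol_time * 1e3:.2f} ms")
print(f"efficiency with the same 40 usec gap: {efficiency:.1%}")  # ~96.8%
```

The same gap that destroyed the single-carrier link now costs only a few
percent, because each symbol lasts thousands of times longer.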

DVB-T has 6700-odd parallel streams in 8K mode. The gap time for the
small area SFN in Berlin was set to 1/8, so that starts eating into
spectral efficiency, but not too badly yet. On the other hand, SFN towers
have to be fairly close together, even at 1/8 GI. DVB-T2 goes up to 32K
mode, which really makes each symbol long duration. Making even longer
gap times possible without seriously degrading spectral efficiency. (On
the other hand, how well dynamic echo is tolerated when symbols become
very long is another question.)
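For a feel of what that 1/8 GI buys in tower spacing, a quick sketch. The
896 usec useful symbol time is the standard DVB-T 8K figure for an 8 MHz
channel (not stated in the post), and the conversion from guard time to
path difference is just speed-of-light physics:

```python
# Rough SFN geometry: how much excess path a guard interval can absorb.
C = 3e8                      # m/s, radio propagation speed
T_USEFUL = 896e-6            # useful symbol duration, DVB-T 8K mode, 8 MHz
GI_FRACTION = 1 / 8          # guard interval used in the Berlin SFN

t_guard = T_USEFUL * GI_FRACTION           # 112 usec
max_path_diff = C * t_guard                # ~33.6 km of excess path
overhead = t_guard / (T_USEFUL + t_guard)  # ~11% of air time spent on the gap
print(f"guard: {t_guard * 1e6:.0f} usec")
print(f"max path difference: {max_path_diff / 1e3:.1f} km")
print(f"spectral overhead: {overhead:.1%}")
```

So at 1/8 GI in 8K mode, signals arriving with up to roughly 34 km of path
difference still land inside the guard, at the cost of about 11% of the
channel. Longer symbols (bigger FFT) stretch that distance without raising
the percentage.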

ATSC does not take this cheap route of gap times for echo tolerance.
Instead, it uses an equalizer, like those fancy multi-adjustment tone
controls that were popular in the '70s. The equalizer twists the symbols
back into shape, as they travel through it. What you're doing is taking
all that spillage, in different parts of the spectrum occupied by the
symbol stream, and shoving it back where it belongs, in time.

I think it's intuitively obvious that an equalizer capable of correcting
symbols in which energy is spilling over for a very long time, over very
many symbols, is going to be more expensive to do than what COFDM has to
deal with. It takes more of those sliders, to get way out there in time,
and the equalizer has to be made not to add a lot of noise. All COFDM
has to do is rely on a gap time.
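To see why "way out there in time" costs sliders, here's a back-of-envelope
count of symbol-spaced equalizer taps needed to span a given echo. The
10.76 Msym/s figure is the real ATSC 8-VSB symbol rate; the echo durations
are illustrative:

```python
# Tap count for a symbol-spaced equalizer: one tap per symbol period of echo.
SYMBOL_RATE = 10.76e6        # ATSC 8-VSB symbols per second

for echo_us in (5, 20, 40, 60):
    taps = round(echo_us * 1e-6 * SYMBOL_RATE)
    print(f"{echo_us:>3} usec echo -> equalizer spans ~{taps} taps")
```

A 40 usec echo already needs an equalizer reaching some 430 symbols back,
and every one of those taps is arithmetic done at the full symbol rate,
which is where the silicon cost comes from.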

If you can successfully design such an equalizer, the advantage is that
you have not relied at all on gap times. Which means you have gotten
good spectral efficiency. So, DVB-T2 takes both approaches at the same
time. Gap times and equalizers, to look for a better balance.

There are other issues to be considered, like initially achieving symbol
sync. Suffice it to say that DVB-T took the clever, low cost route
(primarily active pilots), but that did not come for free. So now,
DVB-T2 is making things more efficient. ATSC primarily looks for
well-known symbol patterns, and McDonald, Limberg, and Patel came up with
additional clever ways to solve this sync problem.

And also, once the more complicated solutions have been made in silicon,
and produced in large numbers, the costs become "who cares." Just like,
say, the faster modes of Ethernet on twisted pair copper. They too are
very complex, but does anyone care after a while?

You can UNSUBSCRIBE from the OpenDTV list in two ways:

- Using the UNSUBSCRIBE command in your user configuration settings at 

- By sending a message to: opendtv-request@xxxxxxxxxxxxx with the word 
unsubscribe in the subject line.
