[opendtv] Re: DTT in the US

  • From: Craig Birkmaier <craig@xxxxxxxxx>
  • To: opendtv@xxxxxxxxxxxxx
  • Date: Sat, 14 Jan 2006 10:13:06 -0500

At 7:13 PM +0000 1/13/06, John Willkie wrote:
>John "Willkie" (note spelling, and I would think that someone with 
>as difficult to spell a last name as yours would be sensitive to 
>correctly spelling names) maintains that "no cable company in the 
>United States has ever made a profit from operations ever."

With all due regard, Mr. "Willkie"...

My apologies for the misspelling.

>And, this John Willkie doesn't ever confuse operating revenues from 
>actual profits.  Exxon this year made more profit in a quarter than 
>any company in history has ever made.  To even put that in a posting 
>where you try to equate profit with "operating revenues" (which, by 
>the way, are not even the same terms vis a vis cable and oil 
>companies) is ... breath-taking.

I am well aware of the difference between operating revenue and net 
profits. After I posted the message I realized that I was citing 
operating revenues, not net profits for cable. My bad. All of the 
other stats cited net profits.

But it is important to recognize that the cable industry had very 
significant operating revenues. The reason is that these companies 
generate a great deal of operating cash flow. Like the airline 
industry, they have many ways to hide the profits. In the airline 
industry, a huge portion of operating revenues is paid out to the 
people who own the leases on the planes. The cable industry grew on 
the back of "junk bonds," with most of the operating profits in the 
'80s and '90s going to pay off those bonds - in other words, most of 
the "profits" went to the bond holders, not the cable MSOs. In recent 
years the industry has invested more than $60 billion in digital 
upgrades, so a big portion of operating revenues is now being used to 
pay off those upgrades.

>This is a good time to point to another obvious syntactical error of 
>yours.  In you Broadcast Engineering column of two months ago, you 
>say that the code in firmware and the code in software is "all 
>software."  It's a breathtakingly ignorant assertion.  Software 
>means that it can be changed after deployment, and firmware cannot 
>be changed, although the circuits housing the firmware can be 
>replaced.  If the ic vendor is still in busines and if they still 
>have a business relationship with the equipment vendor.

Better go back and read that column again. It was comparing software 
video encoders and hardware encoders - specifically for broadcast 
MPEG-2 applications, but also for non-real-time encoding applications.

I never used the word "firmware" in that column.


I am not surprised at your breathtakingly ignorant assertion, as you 
clearly missed the entire point of the column. All video compression 
algorithms are software. Almost all of them define the syntax of an 
encoded bitstream and the requirements for a decoder to turn the 
encoded bits back into pictures. These algorithms do not define the 
operation of an encoder; they DO, however, define the bits that the 
encoder must produce. The reality is that all encoders, whether 
software or hardware, do essentially the same things, using the 
algorithmic routines necessary to produce a compliant bitstream.
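
The idea can be illustrated with a toy "compression spec" (a 
hypothetical run-length scheme in Python, not anything from the 
column): only the decoder and the stream syntax are fixed, so any 
encoder that emits compliant pairs is legal, and a slower, more 
thorough encoder simply produces a more efficient stream that decodes 
to the same pictures.

```python
def decode_rle(pairs):
    """The 'spec': a compliant stream is a list of (count, value) pairs."""
    out = []
    for count, value in pairs:
        out.extend([value] * count)
    return out

def encode_fast(data):
    """A naive 'real-time' encoder: no lookahead, one pair per symbol."""
    return [(1, v) for v in data]

def encode_thorough(data):
    """A slower encoder that coalesces runs -- same syntax, fewer pairs."""
    pairs = []
    for v in data:
        if pairs and pairs[-1][1] == v:
            pairs[-1] = (pairs[-1][0] + 1, v)
        else:
            pairs.append((1, v))
    return pairs
```

Both encoders satisfy the "spec" because the decoder reconstructs the 
same data from either stream; the spec never says how the pairs must 
be found.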

As I pointed out in the column, the major difference is that hardware 
encoders typically must operate under real-time constraints, and thus 
may not be able to run some of these algorithmic routines to their 
"best" conclusion. This is particularly true for the block matching 
routines that form the basis for the "motion compensated prediction" 
that gives MPEG-2 and other inter-frame compression algorithms the 
extra compression efficiency needed for emission encoding.
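
As a sketch of why the real-time constraint matters, here is a 
minimal full-search block matcher in Python (an illustration of the 
general technique, not MPEG-2's actual routine): shrinking the search 
range, as a real-time encoder may be forced to, can make it miss the 
true displacement and leave a larger residual to encode.

```python
def sad(block, ref, bx, by, n):
    """Sum of absolute differences between block and an n x n region of ref."""
    return sum(abs(block[j][i] - ref[by + j][bx + i])
               for j in range(n) for i in range(n))

def best_motion_vector(block, ref, x, y, n, search_range):
    """Exhaustively test every displacement within +/- search_range pixels
    of (x, y), returning the lowest-cost motion vector and its cost."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            nx, ny = x + dx, y + dy
            if 0 <= nx <= len(ref[0]) - n and 0 <= ny <= len(ref) - n:
                cost = sad(block, ref, nx, ny, n)
                if cost < best_cost:
                    best_cost, best = cost, (dx, dy)
    return best, best_cost
```

With a generous search range the matcher finds a perfect match (zero 
residual); with the range cut down for speed, the best it can do is a 
poorer match that costs bits downstream.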


It is very true that most hardware encoders embed these algorithmic 
routines into hardware implementations. In some cases this takes the 
form of dedicated ASICs that are hardwired, and can never be updated. 
In some cases the algorithms are embedded in programmable logic, 
which CAN be updated, as can many forms of firmware, like the iPod 
update I just received from Apple. On the encoding side, the most 
successful encoding platforms (in terms of deployments) are built 
atop programmable chips that allow for field upgrades as the 
companies develop improved implementations of the algorithms.

If you are more comfortable with the notion that writing a software 
application that runs on a generic CPU (with help from the dedicated 
hardware in most GPUs today) is somehow different from writing code 
for a programmable application-specific processor, then I guess I am 
breathtakingly ignorant.

As you say, it is all code.

>It shows you didn't understand the difference from 20 years ago, and 
>the changes keep on coming, since the current state of the art is 
>that "software" can be changed by the application at run-time (well 
>after the code has been written) based on experience.  Try that 
>trick with firmware sometimes.  And, just in case some nit-picker is 
>emboldened: some software requires a run-time component, and some of 
>those run-time components are implemented in firmware.

You are dancing on the head of a pin here, John. The reality is that 
both hardware and software encoders typically run on a combination of 
programmable chips and chips in which the code is embedded in 
hardware and/or firmware. If a routine like the DCT/inverse DCT can 
be implemented in hardware and never changes, then both software and 
hardware encoders may call on that hardware routine for execution, 
even if other parts of the implementation can be changed via software 
or firmware upgrades.
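
For concreteness, a fixed transform like the 8-point DCT is exactly 
the kind of routine that can be written (or etched in silicon) once 
and called from every code path. A hypothetical Python sketch of the 
orthonormal DCT-II and its inverse, which round-trip exactly and 
never need to change:

```python
import math

N = 8  # 8-point transform, the building block of MPEG-2's 8x8 DCT

def dct_1d(x):
    """Orthonormal 8-point DCT-II of a list of 8 samples."""
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                for n in range(N))
            * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
            for k in range(N)]

def idct_1d(X):
    """Matching inverse (DCT-III); reconstructs the original samples."""
    return [sum(X[k] * (math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N))
                * math.cos(math.pi * (n + 0.5) * k / N)
                for k in range(N))
            for n in range(N)]
```

Because the pair is mathematically fixed, whether it executes as this 
software, as microcode, or as a hardwired ASIC block makes no 
difference to the surrounding (upgradable) encoder logic that calls it.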

None of this changes the point of the article. In almost all cases 
you can do a better job of encoding using non-real-time techniques, 
even if much of the actual encoding work is done with hardware. This 
is the rationale behind the use of compressionists for DVD and other 
forms of content authoring where you want the best possible quality 
for the smallest number of bits.

The article points out the fact that real-time encoding often 
involves compromises because the prediction routines cannot be run to 
their proper conclusion. Thus you can take a program, for which the 
producer spent the time to optimize the compression, and trash it by 
decoding before the master control switcher, then re-encoding it for 
emission with a real-time encoder.
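
A toy model of why that cascade hurts (hypothetical numbers, with 
simple scalar quantization in Python standing in for a full encoder): 
each encode generation rounds to its own step size, and a re-encode 
of already-decoded material compounds the error beyond what either 
pass would cause alone.

```python
def encode(samples, step):
    """Model lossy compression as scalar quantization at a given step size."""
    return [round(s / step) for s in samples]

def decode(indices, step):
    """Reconstruct an approximation of the signal from the indices."""
    return [q * step for q in indices]

def max_error(a, b):
    """Worst-case reconstruction error versus the original."""
    return max(abs(x - y) for x, y in zip(a, b))
```

Quantizing a carefully mastered signal a second time, at a mismatched 
step size, leaves more error than either single encode - the 
decode-then-re-encode path through master control in miniature.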

I went to all of this effort to point out that in the emerging world 
of video downloads, real-time is largely meaningless. Content 
producers are likely to take the time to optimize the quality of 
their product, and they will rightfully expect that these bits will 
be delivered unaltered, just like the binaries we download for 
software applications.

>I could go on, talking about the inherent differences between the 
>Java Virtual Machine (model T) and the .Net framework 2.0 (Cadillac 
>Seville) and how MS uses the framework to extend their reach into 
>new areas, but I'll save that for later.

If you did go on it would be irrelevant to this discussion. But it 
would make for a very interesting new thread. This particular issue 
lies at the heart of the current squabble about next generation High 
Definition DVD formats.

DVDs are just bit buckets. The software that determines what a 
consumer can do with these new discs is the real battle line here. 
Microsoft is focused like a laser beam on binding DVDs with their 
Internet back end. They will do anything to prevent Java from winning 
this battle, including dipping into their deep pockets to subsidize 
HD-DVD, to give it a big competitive advantage over Blu-Ray and Java.

You can UNSUBSCRIBE from the OpenDTV list in two ways:

- Using the UNSUBSCRIBE command in your user configuration settings at 

- By sending a message to: opendtv-request@xxxxxxxxxxxxx with the word 
unsubscribe in the subject line.
