[opendtv] Re: Common encoding formats

  • From: Kilroy Hughes <Kilroy.Hughes@xxxxxxxxxxxxx>
  • To: "opendtv@xxxxxxxxxxxxx" <opendtv@xxxxxxxxxxxxx>
  • Date: Sat, 26 Jul 2014 00:06:20 +0000

Saw this on an infrequent visit, and it doesn't look like you got any responses.

My answer:  Depends on the workflow.

*        For last-mile encoding (usually Internet adaptive streaming or 
download), most of the files and streams we get are already 4:2:0 8-bit, so we 
have to avoid any re-quantization, colorspace conversion, etc. that would 
introduce visible banding.  We don't use interlaced source because of poor 
quality and poor compression (spatial/temporal cross products).  (A quick 
source probe is sketched just after this list.)

*        Upstream, in-house production and editing is hopefully 4:4:4 10-bit.

*        For interchange, that is usually mezzanine-compressed, such as ProRes 
or CineForm (VC-5).

*        Contribution formats are unpredictable.  Might be a project format 
(e.g. ProRes), or a delivery format used for broadcast, DVD, BD, etc.

I'm only considering files and streams, because I consider tape dead, even for 
archive (start the ten-year conversion now so you can finish before all the 
spare parts are gone).  Plus, it is really hard to ship tapes to the cloud.
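For the first bullet, here is a minimal sketch of the kind of source check that 
decides whether an encode can leave the pixel format untouched.  It assumes 
ffprobe (from the ffmpeg suite) is on the path; the file name is hypothetical.

    import json
    import subprocess

    def probe_pixel_format(path):
        """Ask ffprobe for the first video stream's pixel format and bit depth."""
        out = subprocess.check_output([
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=pix_fmt,bits_per_raw_sample",
            "-of", "json", path,
        ])
        stream = json.loads(out)["streams"][0]
        return stream.get("pix_fmt"), stream.get("bits_per_raw_sample")

    pix_fmt, bits = probe_pixel_format("source.mp4")  # hypothetical input
    if pix_fmt == "yuv420p":
        print("already 4:2:0 8-bit: keep it that way, no re-quantization")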

I'm anticipating live ingest and last mile moving to 10+ bit 4:2:2 for 4K and 
HDR.  8-bit looks like shit at 4K, and HDR's wider gamut and dynamic range make 
the banding more obvious.
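Some back-of-the-envelope arithmetic makes the banding point concrete (these 
are the standard Rec. 709 video-range levels; the full-screen gradient is a 
worst case):

    # 8-bit video range spans codes 16..235, 10-bit spans 64..940.
    steps_8bit = 235 - 16    # 219 distinct luma levels
    steps_10bit = 940 - 64   # 876 levels, four times finer

    # A horizontal gradient across a 3840-pixel-wide UHD screen:
    print(3840 / steps_8bit)   # ~17.5 pixels per code value -> visible bands
    print(3840 / steps_10bit)  # ~4.4 pixels per code value

Seventeen-pixel-wide flat bands are easy to spot on a large 4K panel, and 
spreading the same 219 codes across an HDR luminance range makes each 
luminance step bigger, hence more visible.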
The HEVC codec treats 10- and 12-bit as first-class citizens, and I expect the 
infrastructure to support 10+ bit from the start.  AVC was designed and 
implemented at 8-bit, so higher precision is usually unsupported at some point 
in the workflow.
It's possible that HEVC will be both a mezzanine and delivery codec just by 
turning the bitrate knob.
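A minimal sketch of that knob turning, assuming an ffmpeg build with libx265; 
the file names and bitrates are illustrative, not recommendations:

    import subprocess

    def encode_hevc_10bit(src, dst, bitrate):
        """10-bit 4:2:0 HEVC encode; only the target bitrate changes."""
        subprocess.run([
            "ffmpeg", "-i", src,
            "-c:v", "libx265",
            "-pix_fmt", "yuv420p10le",  # keep 10-bit precision end to end
            "-b:v", bitrate,
            dst,
        ], check=True)

    encode_hevc_10bit("master.mov", "mezz.mp4", "80M")     # mezzanine-ish rate
    encode_hevc_10bit("master.mov", "delivery.mp4", "8M")  # last-mile rate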

The video production and delivery model needs to be entirely overhauled for 
the HEVC/UHD generation.

Capture should be at the limits of human perception ... including spatial and 
temporal sampling, color space, dynamic range, gamma, precision, etc.  It's 
useful to store the capture conditions and render intent to help intermediate 
and endpoint rendering in reduced range, gamut, gamma, resolution, etc. ("color 
correction").

Today each device must approximate the producer's render intent with a 
different screen aspect ratio, display technology, refresh rate, brightness, 
dynamic range, viewing condition (e.g. dark home theater or iPad in the 
sunlight), etc.  This has always been true for Rec. 709 content that was color 
graded on a studio monitor in a studio, but it is now officially broken with 
HDR content that isn't squashed to fit on existing interfaces and devices.  No 
matter how old you are, it is now officially impossible to frame and 
color-correct video as if it were all going to be displayed on a 27" CRT in a 
living room.

Only the display device knows its display capabilities and viewing 
environment.  Frame rate sampling and display refresh, spatial sampling 
("resolution"), and scaling/cropping/padding/framing became a device function, 
not a studio function, with the rise of internet video.  Adaptive streaming 
makes spatial subsampling and bitrate device-controlled.  Now control of 
endpoint precision, color volume scaling, etc. needs to be transferred to 
devices as well.
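A toy sketch of the kind of color volume scaling that moves to the device, 
assuming PQ-coded (SMPTE ST 2084) luminance and a naive clamp to the panel's 
peak; real devices will use smarter tone mapping than this:

    # SMPTE ST 2084 (PQ) EOTF constants
    M1, M2 = 2610 / 16384, 2523 / 4096 * 128
    C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

    def pq_to_nits(code):
        """Decode a normalized PQ code value (0..1) to luminance in nits."""
        p = code ** (1 / M2)
        return 10000.0 * (max(p - C1, 0.0) / (C2 - C3 * p)) ** (1 / M1)

    def scale_to_panel(code, panel_peak_nits):
        """Naive device-side adaptation: clamp mastered luminance to panel peak."""
        return min(pq_to_nits(code), panel_peak_nits)

    # A PQ code of ~0.75 masters to ~1000 nits; a 600-nit panel must adapt it.
    print(scale_to_panel(0.75, 600.0))  # -> 600.0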

Kilroy Hughes | Senior Digital Media Architect | Windows Azure Media Services | 
Microsoft Corporation
http://www.windowsazure.com/media

From: opendtv-bounce@xxxxxxxxxxxxx [mailto:opendtv-bounce@xxxxxxxxxxxxx] On 
Behalf Of Mike Tsinberg
Sent: Thursday, June 12, 2014 8:58 AM
To: opendtv@xxxxxxxxxxxxx
Subject: [opendtv] Common encoding formats

I would appreciate experienced opinions about HD encoding practices:

Is 8-bit the most common encoding grayscale resolution for streaming and 
broadcasting, or is 10-bit used as well?

Also, what about color subsampling?  Is 4:2:0 the most common, or is 4:2:2 
used as well?

Best Regards,
Mike Tsinberg
http://keydigital.com



