
If you deinterlace an interlaced source before encoding you either discard information or store twice as much. And potentially people watching on an interlaced screen lose data - not all deinterlace/interlace pairings roundtrip cleanly. Better to store it in the original format, and then it can go through one round of deinterlacing on playback for those screens that need it, and none for those that don't.
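A toy numpy sketch of why the roundtrip isn't always clean (purely illustrative; real deinterlacers are far more elaborate):

```python
import numpy as np

# One interlaced "frame": even lines sampled at time t, odd lines at t+1.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(8, 8)).astype(float)

even_field = frame[0::2]  # lines 0,2,4,6 (time t)
odd_field = frame[1::2]   # lines 1,3,5,7 (time t+1)

# A naive "bob" deinterlace: rebuild full height by repeating each
# field's lines, yielding two progressive frames per interlaced frame.
bob_t = np.repeat(even_field, 2, axis=0)
bob_t1 = np.repeat(odd_field, 2, axis=0)

# Re-interlacing these exact frames would recover the fields - but any
# smarter deinterlacer rewrites the lines (here, crude line averaging),
# and then pulling the even field back out no longer matches the source:
smoothed = (bob_t + np.roll(bob_t, 1, axis=0)) / 2
print(np.array_equal(smoothed[0::2], even_field))  # False
```

So unless the playback chain uses the exact inverse of whatever deinterlacer was applied at encode time, the original fields are gone.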

(Also deinterlacing approaches get better over time - if a particular episode entered their catalogue 10 years ago and was deinterlaced using the state of the art approach at the time, it would look much worse than a modern deinterlace)



One of the overriding constraints on these sorts of video pipeline tasks is touching the input pixels as few times as possible. Every lossy transformation you do (cropping, scaling, color correction, transcoding) potentially introduces defects; and that's assuming that your lossless transformations (repackaging container formats, for instance) are bug-free.
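A toy model of that generation loss (not a real codec - the "lossy pass" here is just a gain change plus 8-bit re-quantization):

```python
import numpy as np

# One lossy pass: halve the brightness, re-quantize to 8 bits,
# then double it back. Even this trivial roundtrip isn't lossless;
# real pipelines stack many such passes.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)

def requantize(x, gain):
    # Scale, round back to integers, clamp to the 8-bit range.
    return np.clip(np.round(x.astype(float) * gain), 0, 255).astype(np.uint8)

roundtrip = requantize(requantize(img, 0.5), 2.0)
changed = np.count_nonzero(roundtrip != img)
print(changed > 0)  # True: every odd pixel value lands on an even one
```

Each additional lossy stage compounds this kind of rounding damage, which is why pipelines try to touch the pixels once.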

I'd love to see some figures from Netflix's QC team; I bet at their scale they see all kinds of insane edge-case problems.


Uhm, no - due to the way modern video formats work, you don't store twice as much data. On the contrary, H.264 and similar modern formats are significantly more efficient at storing progressive (including deinterlaced) video than an equivalent interlaced stream.


Nope. Modern (and even ancient) video codecs can store interlaced data just as efficiently as progressive data - how could it be otherwise? But when you deinterlace e.g. a 30 frames per second interlaced source, either you store the result as 60 frames per second (twice as much data), or you lossily downsample to 30 frames per second.
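The arithmetic behind that tradeoff, as a back-of-the-envelope sketch (plain Python, illustrative names only):

```python
# One second of 30 fps interlaced video carries two fields per frame.
interlaced_fps = 30
fields_per_second = interlaced_fps * 2        # 60 temporal samples

# Option 1: "bob" deinterlace - one progressive frame per field.
bob_fps = fields_per_second                   # 60p: twice the frames

# Option 2: fold each field pair into one frame and store 30p,
# discarding half of the temporal samples.
downsampled_fps = fields_per_second // 2      # 30p: lossy in time

print(bob_fps, downsampled_fps)  # 60 30
```

Either way you pay: double the frame count, or half the motion resolution.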



