Jul 5, 2008

Display Technology Differences

Television screens display images using a completely different technology than computer monitors. This is unfortunate because it leads to problems when trying to display video on a computer screen. However, it also can be a blessing, because television technology is nearly 100 years old, and much better technology is now available. The problem is that for the foreseeable future, we're caught between the two, shooting with cameras that are designed to record video in the NTSC/PAL (television) standard, and distributing our video on the Internet to be viewed on modern displays.

Interlaced versus progressive displays
Television screens are interlaced displays. Each frame of video is divided into two fields: one consisting of the odd lines of the image, the other the even lines. These two fields are scanned and displayed in sequence, so NTSC television actually shows 60 fields per second (50 for PAL), which we perceive as continuous motion.

Computer monitors, whether cathode ray tube (CRT) or liquid crystal display (LCD), are progressive displays. Each frame of video is drawn from left to right, top to bottom. There are no fields. Problems appear when we try to create a single frame of progressive video from two fields of interlaced video (see Figure 1).


Figure 1: Converting two fields of interlaced video with significant horizontal motion to a single frame of progressive video can be problematic.


In Figure 1, a minivan is driving past the camera. During the split second between the first and second field scans, the minivan has moved across the frame. When this video is displayed on an interlaced display, it appears normal, because the second field is displayed a split second after the first. However, if we try to combine these two fields of interlaced video into a single progressive frame, interlacing artifacts appear because of the horizontal motion. The front edge of the minivan is "feathered," and both tires are a blur. At either the editing or the encoding phase, something must be done to deal with this problem.
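The field structure, and one crude way to remove the feathering, can be sketched with NumPy. This is a minimal illustration of the "bob" (line-doubling) approach, not how any particular editor or encoder implements deinterlacing:

```python
import numpy as np

def split_fields(frame):
    """Split an interlaced frame into its two fields:
    the even-numbered lines and the odd-numbered lines."""
    return frame[0::2], frame[1::2]

def deinterlace_bob(frame):
    """Deinterlace by discarding the second field and
    line-doubling the first. This removes feathering from
    horizontal motion at the cost of half the vertical
    resolution."""
    first_field, _ = split_fields(frame)
    return np.repeat(first_field, 2, axis=0)
```

Other strategies make different trade-offs: blending the two fields averages the feathering into a motion blur, while motion-adaptive deinterlacers keep both fields in static areas and fall back to interpolation only where movement is detected.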

Color spaces
Television and computer monitors encode color information differently. Television signals are encoded in terms of luminance and chrominance (YUV encoding); computer monitor signals are encoded in terms of the amount of red, green, and blue in each pixel (RGB encoding). We also watch them in different environments. Televisions are generally viewed in somewhat dim surroundings, whereas computer monitors are generally in bright areas. The combination of these factors means that content created for one environment doesn't look right when displayed on the other.

Digitized NTSC video looks dull and washed out when displayed on a computer monitor. Video that has been processed to look right on a computer monitor looks too bright and saturated (colorful) when displayed on an NTSC monitor. The incompatibility between the two display technologies makes it problematic to create high-quality video, particularly if you want to display your content on both. If you're producing content for both broadcast and the Internet, at some point your project must split into two separate projects. After you start processing a video signal for display on a computer monitor, you won't be able to display it on a TV monitor.
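Part of the "washed out" effect comes down to levels: NTSC-era digital video (BT.601) puts luma in a studio range of 16-235, while a computer display expects the full 0-255 range. A minimal sketch of the BT.601 luma formula and the levels expansion, assuming 8-bit values:

```python
import numpy as np

def rgb_to_luma(r, g, b):
    """BT.601 luma: the weighted sum of R, G, B used by
    NTSC-era digital video. The weights sum to 1.0."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def expand_levels(y):
    """Stretch studio-range luma (16 = black, 235 = white)
    to the full 0-255 range a computer display expects.
    Skipping this step is one reason untouched broadcast
    video looks dull on a monitor."""
    y = np.asarray(y, dtype=np.float64)
    return np.clip((y - 16.0) * 255.0 / (235.0 - 16.0), 0.0, 255.0)
```

Going the other way (compressing full-range computer video back into 16-235) is why monitor-tuned video looks too bright and saturated on a TV: its blacks and whites land outside the range the television expects.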

Tip: The best way to manage this issue is to work exclusively in the broadcast space during your digitizing and editing phases. Archive your masters in a broadcast format. Don't do your post-processing for Internet viewing until the encoding phase, or at least after all your editing has been done and you have a broadcast-quality master. That way, you always have a version of your video that can be broadcast or burned to DVD. Create a special version that is intended for Internet-only consumption. As new formats evolve, you can always re-encode from your broadcast-quality master.
