Jun 12, 2008

How video works

Many years ago, some clever folks discovered that they could exploit something known as persistence of vision to create the illusion of moving pictures ("movies"). When a series of still images is projected at a fast-enough rate, the human brain "fills in the blanks" and perceives the result as continuous motion. The threshold is approximately 20 frames per second. Anything faster appears to be continuous, and anything slower is perceived as discrete frames (or at least jerky motion).

Fast forward a few decades, and along comes television, which relied on the same principle. But instead of projecting light through film, television encoded each frame as an electronic signal that could be broadcast long distances. To create a frame of video, the image is divided into horizontal lines, and each line is scanned inside a video camera and converted into an electronic signal. This signal is broadcast and received by a television, which shoots it at the television screen, line by line.
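
To make the line-by-line idea concrete, here's a toy Python sketch. It isn't any real broadcast API (all the names are mine): a frame is just a list of horizontal scanlines, "transmitted" one at a time and redrawn on a screen in the same order.

# A toy model of raster scanning: frame -> scanlines -> screen.
# All names here are illustrative, not a real video API.

def scan_frame(frame):
    """Yield the frame's scanlines from top to bottom, as a camera scans them."""
    for line in frame:
        yield line

def display_frame(scanlines, screen):
    """Redraw received scanlines onto the screen, one line at a time."""
    for row, line in enumerate(scanlines):
        screen[row] = line

# A 4-line "frame" of pixel brightness values.
frame = [
    [0, 1, 2, 3],
    [4, 5, 6, 7],
    [8, 9, 10, 11],
    [12, 13, 14, 15],
]
screen = [None] * len(frame)
display_frame(scan_frame(frame), screen)
assert screen == frame  # the receiver rebuilt the picture line by line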

Of course, all this scanning and projecting happens very quickly. We need to view at least 20 frames per second to see continuous motion, and with television an even higher frame rate was needed because the frames were drawn line by line, which took some time. Thirty frames per second was chosen as the standard: exactly half of 60 cycles per second, the oscillating frequency of AC power in the United States.

But there was a problem. Early television technology was limited, and it was discovered that 30 frames per second appeared to flicker ever so slightly. The obvious solution was to increase the frame rate, but the technology of the time just couldn't handle a higher frame (and hence data) rate. Some sort of compromise had to be reached. The solution was interlacing.

Instead of sending 30 discrete frames per second, each frame of video is divided into two fields, one consisting of the odd-numbered lines, the other of the even-numbered lines. The fields are scanned alternately, odds and then evens, broadcast, and then projected onto the television screen (see Figure 1).


Figure 1: How interlacing works.

Each frame of video contains two fields, which are broadcast at 60 fields per second. Because each field contains exactly half the data of the original frame, the data rate (bandwidth) of the signal is the same as that of the original non-interlaced 30-frames-per-second signal. Interlacing solved the flicker problem, and thus a standard was born. The National Television System Committee (NTSC) standard specified 30 frames of interlaced video per second, divided into 525 lines of resolution, of which approximately 480 are visible. The remaining lines are known as the vertical blanking interval (VBI) and are used for synchronization (and, later, closed captions).
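
Here's a small Python sketch of that split (illustrative names only, with lines numbered from 1 to match the description above): a frame, given as a list of scanlines, is divided into its odd and even fields, and each field carries exactly half the lines.

# Split a frame into two interlaced fields.
# "Line 1" is the top scanline, so odd-numbered lines sit at even indexes.

def split_into_fields(frame):
    """Return (odd_field, even_field) for a frame given as a list of lines."""
    odd_field = frame[0::2]   # lines 1, 3, 5, ...
    even_field = frame[1::2]  # lines 2, 4, 6, ...
    return odd_field, even_field

frame = [f"line {n}" for n in range(1, 7)]
odd_field, even_field = split_into_fields(frame)
assert odd_field == ["line 1", "line 3", "line 5"]
assert even_field == ["line 2", "line 4", "line 6"]
# Each field is half a frame, so 60 fields per second moves exactly as
# much data as 30 full frames per second.
assert len(odd_field) == len(even_field) == len(frame) // 2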

We'll see later that interlacing is no longer necessary with modern display technology. Computer monitors are not interlaced; they're known as progressive scan devices. For those of you interested in high-definition television, this is why there is a plethora of standards in the HD world, both interlaced and progressive. Converting between the two display methods can create problems that we'll have to deal with later.
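
To give a taste of those problems, here's a sketch of one naive deinterlacing method, often called "bob" or line doubling: rebuild a progressive frame from a single field by repeating each field line. Real deinterlacers interpolate between lines or use motion compensation; this is only meant to show the shape of the problem, and the names are mine.

# Naive "bob" deinterlacing: double each field line to fill in the
# rows that belong to the missing field. Simple, but it halves the
# vertical resolution and can make edges shimmer.

def bob_deinterlace(field):
    """Approximate a full progressive frame from one field."""
    frame = []
    for line in field:
        frame.append(line)
        frame.append(line)  # stand in for the missing neighboring line
    return frame

odd_field = ["line 1", "line 3", "line 5"]
print(bob_deinterlace(odd_field))
# ['line 1', 'line 1', 'line 3', 'line 3', 'line 5', 'line 5']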
