

The Video Food Chain: Designing video systems with emphasis upon the whole scheme of video quality yields high-quality results.


May 1, 1998 12:00 PM
Steve Somers

The term “video” used to mean only one thing: television. Many people did not know the term until video cassette recorders and camcorders became available. Even then, video is still just one cable connection along with the audio connection. Consequently, it is fairly simple for almost everyone and, in reality, it has provided some reasonable results.

Officially, it is composite video, but where does that fit into the whole scheme of video quality? Or, in other words, where does composite video reside in what I call the video food chain? That brings us to the point of this article: identifying the type of video connections that will provide the best quality.

Noble beginnings

Composite video represents the first detected baseband form of picture information in a television receiver after the RF tuner. Initially, offering a television receiver with an auxiliary video input represented a significant cost upgrade for receiver manufacturers. Eventually, it became necessary to add composite video inputs to new receivers to support the growing market of new, amateur videographers. It did not take long before consumers became dissatisfied with using an outboard modulator to get an RF connection into the television set; the quality is low at best.

Thanks to the VCR manufacturing industry, the consumer slowly embraced S-video through the marketing of the S-VHS VCR. Of course, the manufacturers were forced to add the S-video connection to take advantage of the new technology available in tape recording. Building a new tape recorder with additional bandwidth that would be thrown away by re-encoding the video signal back to composite made no sense at all. Because the chrominance and luminance information are recorded as two separate signals on tape, why not bring them off tape the same way?

This brings us closer in quality to the original source material if it was pre-recorded in S-VHS mode, but recording from regular RF or composite video feeds still relied on the quality of the VCR's decoder. S-VHS did not catch on instantly because most consumers owned televisions without an S-video input connection. It has taken many years for the various appliance manufacturers to catch up with the market. The S-VHS concept is now about 10 to 15 years old and has only really caught on in recent years.

Today, DVD is the new medium for video. Composite and S-video are the minimum output complement. Consumers are now aware of the merits of S-video, and many own televisions with the S-video input and output. A few DVD players, however, are equipped with component video connections. Will there ever be an end to all the new formats and connections?

History is the best teacher

It seems as though this could go on forever. What we would really like to have in our system are the original signals used to create the image, meaning the signals from the three light sensors in the camera recording the image. In other words, the RGB components would be best.

Since the beginning of color television and colorimetry theory, it has been known that all visible colors within the spectrum of light can be constructed from the three primary colors: red, green and blue. That is why they are referred to as primary. When mixed in equal amounts of energy, they collectively produce white light.

After the concept of color television moved on from using a rotating color wheel to create the illusion of a color image, the method for creating color images developed into the electronic format we use today: the NTSC system. NTSC stands for the National Television Systems Committee (some affectionately render it as “Never Twice the Same Color,” but that's another story), which developed the mathematics and structure for electronically sensing and transmitting the color image. A key facet of the NTSC system is that it had to maintain compatibility with the existing monochrome televisions already in the consumer market. Realize that by the time color television reached the market, monochrome television had about a 15-year head start. The method the NTSC used to accomplish compatibility is quite clever.

The NTSC system had to pack the color information from three channels into one signal that could be transmitted over the airwaves to the viewer within a limited, defined bandwidth. The bandwidth requirement of the three RGB channels greatly exceeds the frequency allocations for monochrome television. Although terrestrial broadcasting steered development of the television signal, the end result allowed the picture information to be carried over one wire.
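The first step of that packing can be sketched in a few lines: form a monochrome-compatible luminance channel from weighted RGB, then carry the color as two difference signals. The 0.299/0.587/0.114 weights are the standard NTSC luminance coefficients; the function name here is illustrative, not from any particular codebase.

```python
# Sketch: how NTSC-era encoding collapses three RGB channels into
# luminance plus two color-difference signals before transmission.

def rgb_to_ydiff(r, g, b):
    """Convert normalized RGB (0.0-1.0) to Y, R-Y, B-Y components."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance (monochrome sets see only this)
    return y, r - y, b - y                   # Y plus the two difference channels

# Equal energy in all three primaries is white: full Y, zero color difference.
print(rgb_to_ydiff(1.0, 1.0, 1.0))  # ≈ (1.0, 0.0, 0.0)
```

A monochrome receiver simply displays Y and ignores the rest, which is exactly the backward compatibility the committee needed.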

Now, this takes us to an interesting point in the video food chain. If we start at the camera where the image is created, the proper hierarchy of video connections for best quality becomes very clear. Figure 1 shows how we can directly determine the order of image quality. Just as a little error (or a lot, as the case may be) is added by each person in a chain of storytelling, error or noise is added by each process through which the video signal passes. The origin of the image is an RGB image collected by three light-sensing elements, typically charge-coupled devices (CCDs). Each CCD sees a separate color thanks to a special filter positioned in front of it, one for each of the primaries.

It is at the RGB point in the video chain that we obtain the highest image quality. Similarly, in computers where graphic images are created, the imagery is maintained in separate memory planes, or RGB memory planes. The graphics are output in their simplest form, RGB, once converted from a digital representation to an analog representation. This is why high-performance presentation systems use RGB signal feeds from the source through an RGB distribution system to the display. I refer to RGB feeds here for the purpose of describing the basics of video signals.

The video food chain

By now, it should be obvious that we can create a table for the video food chain. We start with RGBHV at the top and proceed down the list to composite video. We can see some of the same format points as in Figure 1 with the camera system. Note that RGB video precedes component video, which precedes S-video, which, in turn, precedes composite video. Decoding requirements are listed for each step in the chain. The NTSC decoder is nearly a mirror image of the camera encoder. Although some noise or distortion may be added in the encoding process, disassembling the composite NTSC signal back into its RGB components is more difficult and prone to error.

Within the RGB formats, the only issue is sync processing, but do not be misguided into thinking that sync processing is a minor issue. In the television world, sync construction is carefully specified, and there are many circuit designs and systems that handle composite sync very well. It is in the computer community where the caveat remains.

There were never any significant standards established for constructing composite sync for the myriad of computer signal formats. Although it is possible to construct composite sync for computer graphic outputs (and many computer manufacturers used it for years), the details of construction can be troublesome for many displays and projectors. Furthermore, removal of sync from the green channel, in the case of RGsB signals, can affect the performance of the green channel or the proper operation of the display's black-level controls. Although this aspect is not an issue in the design of purely NTSC video distribution systems, it is an essential consideration if the integration of computer graphics is anticipated.

The component video format refers to the intermediate three elements used to construct the composite video signal: the Y (luminance) channel, the R-Y (red minus Y) channel and the B-Y (blue minus Y) channel. Decoding this format requires only a good video matrix design. A video matrix is, in its simplest form, a resistive interconnection of the signal channels that takes the algebraic sum of the difference channels (R-Y and B-Y) to produce the missing G-Y channel, then mixes the three difference channels with the Y channel to yield the R, G and B channels. The advantages that component video holds over RGB are that the same information may be transmitted, generated or stored in less bandwidth than three RGB channels, and that the presence of composite sync on the Y channel ensures that any sync processing anomalies will be distributed evenly among the channels upon exiting the matrix.
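The matrix step above reduces to simple algebra, sketched below. Because Y = 0.299R + 0.587G + 0.114B, the missing G-Y channel is a fixed weighted sum of the two transmitted difference channels; the function name and test values are illustrative only.

```python
# Sketch of the video matrix: recreate the missing G-Y channel from
# R-Y and B-Y, then add Y back to each to recover R, G and B.

def component_to_rgb(y, r_y, b_y):
    """Decode Y, R-Y, B-Y component video back to RGB."""
    # G-Y follows algebraically from Y = 0.299R + 0.587G + 0.114B
    g_y = -(0.299 / 0.587) * r_y - (0.114 / 0.587) * b_y
    return y + r_y, y + g_y, y + b_y  # add luminance back into each channel

# Round trip for pure red (R=1, G=0, B=0), whose luminance is Y = 0.299:
r, g, b = component_to_rgb(0.299, 1 - 0.299, 0 - 0.299)
print(round(r, 6), round(g, 6), round(b, 6))  # ≈ 1.0 0.0 0.0
```

In hardware this is just weighted resistive summing, which is why the author calls component decoding the easy case: no filters or detectors, only ratios.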

The S-video format represents color imagery in two channels of information. The brightness and detail information are conveyed by the Y channel. All chroma information is contained within the C channel, which consists of the phase- and amplitude-encoded subcarrier representing the color portion of the image. This signal is accompanied by the subcarrier burst sample. The main advantage of the S-video signal is that luminance information is already separated from chrominance information. Any system using S-video must have a good synchronous detector to recover the chroma difference signals (R-Y and B-Y) used in the matrix to create the RGB image.
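The synchronous detection mentioned above can be illustrated numerically: mix the C channel with in-phase and quadrature reference carriers locked to the burst, then average (a crude low-pass) to recover the two color-difference values. Everything below is a simplified, synthetic sketch; the subcarrier frequency and sample counts are toy values, not real NTSC timing (the real subcarrier sits near 3.58 MHz).

```python
import math

# Sketch of quadrature (synchronous) detection of a chroma subcarrier.

FSC = 4          # toy subcarrier: whole cycles per analysis window
N = 256          # samples in the window

r_y, b_y = 0.3, -0.1   # color-difference values to encode

# C channel: one carrier, with R-Y and B-Y on its two quadrature phases
c = [r_y * math.sin(2 * math.pi * FSC * i / N)
     + b_y * math.cos(2 * math.pi * FSC * i / N) for i in range(N)]

# Detection: multiply by each locked reference and average the product
det_r_y = 2 * sum(ci * math.sin(2 * math.pi * FSC * i / N)
                  for i, ci in enumerate(c)) / N
det_b_y = 2 * sum(ci * math.cos(2 * math.pi * FSC * i / N)
                  for i, ci in enumerate(c)) / N

print(round(det_r_y, 3), round(det_b_y, 3))  # ≈ 0.3 -0.1
```

The two components separate cleanly because the sine and cosine references are orthogonal, which is why the detector must stay phase-locked to the burst: drift the reference phase and the R-Y and B-Y outputs smear into each other (hue errors).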

Decoding the composite video signal requires the most complex scheme. The primary difficulty is in recovering, or separating, the luminance information from the chrominance information. In low-cost systems, separation is accomplished by the notch-filter method. In newer systems, varieties of comb filtering are used with improved results. The goal is to remove, or comb out, the luminance without affecting the quality of the chroma, and vice versa. Combing is possible because of the way the chroma information is interleaved with the luminance information during the encoding process: the energy associated with the Y and C components occupies different RF spectrum space by design. Full NTSC decoding is a demanding task, and finding a high-quality decoder is the challenge. The last thing one should do is pass a video signal from one appliance to the next, thereby requiring each appliance to decode and re-encode the signal.
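The simplest comb, a one-line (1-H) comb filter, exploits the fact that the NTSC subcarrier inverts phase on successive scan lines: averaging two adjacent lines cancels the chroma and leaves luma, while differencing them cancels the luma and leaves chroma. The sketch below demonstrates the cancellation on a synthetic pair of lines; the flat luma and toy subcarrier are illustrative inputs, not real video.

```python
import math

# Sketch of 1-H comb filtering: sum and difference of adjacent lines.

SAMPLES = 32
luma = [0.5] * SAMPLES                                   # flat brightness
chroma = [0.2 * math.sin(2 * math.pi * 4 * i / SAMPLES)  # toy subcarrier
          for i in range(SAMPLES)]

line_a = [l + c for l, c in zip(luma, chroma)]           # line n
line_b = [l - c for l, c in zip(luma, chroma)]           # line n+1: chroma phase inverted

comb_luma   = [(a + b) / 2 for a, b in zip(line_a, line_b)]  # chroma cancels
comb_chroma = [(a - b) / 2 for a, b in zip(line_a, line_b)]  # luma cancels

print(max(abs(y - 0.5) for y in comb_luma))   # ≈ 0: luma recovered cleanly
```

The catch, and the reason decoder quality varies so much, is that real pictures change from line to line: wherever vertical detail breaks the two-line similarity assumption, the comb leaks luma into chroma and vice versa (dot crawl and rainbow artifacts).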

No free lunch

As with virtually all systems, it is much easier to combine components or ingredients to attain a required result than to do the converse: take something apart to determine its makeup. This concept holds true for all video processing equipment. There are inherent performance losses in any signal processing system. Degradation is cumulative, and once information is lost or degraded, it cannot be recovered.

A key situation to avoid is redundant video processing. Redundant processing is the practice of passing video from one RF or composite video port to another. This practice requires the individual components of the system to decode the video signal, process it and re-encode it to send it out the composite port. This situation occurs predominantly with the use of VCRs. Redundant processing is a lengthy path that degrades video performance significantly.
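The cumulative cost of those redundant passes can be made concrete with a toy model: assume each decode/re-encode generation adds a fixed amount of independent noise power. The figures below are illustrative assumptions, not measurements of any real equipment.

```python
import math

# Toy model: signal-to-noise ratio after repeated decode/re-encode passes.

SIGNAL_POWER = 1.0
NOISE_PER_PASS = 0.001   # hypothetical noise power added per generation

def snr_db(generations):
    """SNR in dB after the given number of decode/re-encode passes."""
    return 10 * math.log10(SIGNAL_POWER / (NOISE_PER_PASS * generations))

for n in (1, 2, 4):
    print(f"{n} pass(es): {snr_db(n):.1f} dB")
# 1 pass(es): 30.0 dB
# 2 pass(es): 27.0 dB
# 4 pass(es): 24.0 dB
```

Under this simple additive-noise assumption, every doubling of the number of passes costs about 3 dB, which is why routing through a chain of composite-only appliances is so damaging: the losses compound and can never be undone downstream.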

The old adage “you get what you pay for” has definite relevance with respect to video processing. The decoding task is daunting, and not many equipment and display manufacturers devote the time or development expense to this complex process. Of all the steps required to return from composite video to RGB, separating luminance (the Y channel) from chrominance (the C channel) is the most difficult. This tells us that we should, at a minimum, distribute S-video signals, if not component or RGB. Take a look at Figure 3 for the concept of the ideal video distribution approach.

Determine which piece of equipment in the video system provides the best video processing (decoding), and concentrate all, or as much as possible, of the video routing through that unit. For example, if the display device in use has an excellent NTSC decoder, then designing a composite video distribution system that switches (routes) all composite video to the display will provide good results. In many cases, though, the display does not have the best NTSC processor; displays may be optimized for RGB data display, for example. In that case, process the video through an external NTSC decoder, high-quality line doubler or quadrupler.

Maintaining video quality

By now, the picture for providing the best-quality video signal should be getting clearer. Remember, when designing a distributed video system, route video in the highest-quality format that both cost and utility will allow. Figure 4 shows a system design limited to composite video feeds based on the lowest-performance equipment item. Of course, one need only connect the components by routing a single cable, which cuts the cost overhead and complexity. Look, however, at Figure 5, where system performance is improved markedly with S-video distribution. Cost will increase modestly for cabling, and one external decoder product is required for the VCR. A prudent choice would be to select a VCR with S-video output, if possible.

Review the outputs available from the video-delivering components in the system. Better yet, if designing from scratch, select only components that will deliver the quality you desire. Most consumer-grade equipment offers at least S-video output today. A close look at the higher grades of DVD players will yield the ability to route component video, which is better yet. In some instances, managing component video may not be convenient because many display and projector manufacturers do not yet support it. Of course, there are high-end video processors that accept component input along with the other formats and provide the desirable wideband RGB signal feed.

The ideal would be to convert incoming video to full-bandwidth RGB signals at the head-end and route only RGB. This, of course, means that the system must use a high-performance, multi-input processor such as a line doubler, line quadrupler or image scaler. A high-performance processor can be the key to a great system design in terms of video quality, in spite of the higher cost.

In sum, the video equipment realm is mix-and-match with respect to the output formats available. The key to maintaining high video quality in any system is to position the design as high on the video food chain as possible. As we proceed into the era of digital processing systems in the professional and consumer venues, virtually any output format is available to us. Ones and zeroes can be converted to any desirable video feed. In fact, we can look forward to distributing digital data from source to display in the not-too-distant future. The future opens a whole new horizon of video quality for new system designs.
