Digital Video for AV Integrators

When you transform AV into digital bits and bytes, it opens up a world of possibilities, and a few challenges. Here is everything you need to know about moving digital video around an AV installation.

In the early 1990s, a number of organizations jockeyed to have their standards adopted for the next generation of television. The Advanced Television Systems Committee (ATSC) weighed proposals for a number of analog high-definition TV systems, one of which was the Japanese MUSE satellite broadcasting format. But several forward-thinking companies decided that the time was right to go digital. Led by General Instrument, a consortium known as the Grand Alliance combined the best features from the proposals that they had created individually and convinced the FCC to switch from analog to “ones and zeroes” for television in the 21st century. And the impact from that decision would be felt way beyond traditional TV.

A little more than 20 years ago, the Federal Communications Commission set in motion a plan to overhaul the television broadcasting system in the United States. And much like a pebble dropped in a pond, that initial splash has turned into a set of ripples that has completely changed the acquisition, editing, production, and distribution of video.

Two decades later, digital video is firmly entrenched in every aspect of our daily lives. The terrestrial broadcasting system went digital last year. Direct broadcast satellite companies such as DirecTV and Dish Network have been all-digital for some time. Cable TV companies are shutting down analog channels and converting the rest of their operations to digital as fast as they can. And digital TV is the driver behind AT&T’s U-Verse and Verizon’s FiOS networks.

You may not realize just how pervasive digital communication has become. From DVDs and Blu-ray discs to iPods and iPads, from TiVo and Boxee to YouTube and Hulu, from Netflix streaming to Amazon Unbox, digital video is everywhere. Have you bought a camcorder recently? It’s digital. Listened to HD Radio? Digital.

If you’ve attended the National Association of Broadcasters’ trade show in the past decade, you’ve seen first-hand the migration away from analog to digital. Video streaming, for example, largely derided as a gimmick back in the late 1990s, is now an accepted distribution method. Now, digital video is not only knocking at the door of the pro AV industry; it has kicked the door down and is rushing in to replace our 20th-century distribution systems. Did you notice all of the companies exhibiting video encoders at InfoComm 2010? How about fiber optic and HD-SDI switching and distribution amplifiers?

The question is: When are you going to start using these tools? Digital video makes the impractical practical: distributed video for digital signage, looped networks with independent nodes and controllers, single-cable installations, and independently served displays with on-demand content are all possible.

We know full well that before you’re comfortable using digital video technology, you need to speak its special language, including how it’s encoded, compressed, stamped, multiplexed, distributed, recorded, and played back. So get your highlighters ready. Here’s digital video, soup to nuts.

IN THE BEGINNING

It all starts with video images captured from our analog world by special sensors known as charge-coupled devices (CCDs) and complementary metal oxide semiconductors (CMOS). The output voltages from these sensors are quantized into strings of bits that correspond to the relative voltage levels created by bright and dark areas of an image. By using more bits to describe each sample of the red, green, and blue video signals (increasing the bit depth), the analog-to-digital conversion can more accurately reproduce those signals when they are ready to be viewed.

Why is that important? Early flat-panel displays commonly used eight bits per color channel, or 256 red x 256 green x 256 blue levels. That equals a total of 16.7 million possible colors, which would seem like more than enough. But it really isn't. Images sampled at this bit depth often exhibit abrupt changes between shades of colors that resemble contours on a topographic map, an unwanted artifact known as banding. That's why video cameras sample at greater bit depths, and it's also why professional flat-panel displays are moving to 10-bits-per-channel sampling to create smoother gradients.
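If you want to check that arithmetic yourself, a couple of lines of Python will do it; the figures are straight powers of two, nothing more:

for bits_per_channel in (8, 10):
    levels = 2 ** bits_per_channel       # shades per color channel
    colors = levels ** 3                 # red x green x blue combinations
    print(f"{bits_per_channel}-bit: {levels} levels per channel, {colors:,} colors")

# 8-bit: 256 levels per channel, 16,777,216 colors
# 10-bit: 1024 levels per channel, 1,073,741,824 colors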

One thing that's interesting about digital component video (YCbCr) is that the brightness (luminance) signal contains most of the picture detail. That means we can sample the color information at half the rate, or even lower. So if we determine that four samples of luminance are needed, but only two samples of each of the color difference signals are required, we come up with the ratio 4:2:2, which happens to be a very common sampling scheme for professional digital video (standardized in ITU-R BT.601).

In contrast, digital cinema cameras capture video in a red, green, and blue (RGB) format and must preserve as much detail in each color channel as possible. Accordingly, these high-performance cameras use a 4:4:4 sampling ratio, which results in extremely large files.

On the other hand, digital TV programs on cable and satellite as well as movies recorded to DVD use a 4:2:0 sampling ratio, reducing the color detail by half again from the 4:2:2 standard to conserve bandwidth. (And you probably didn’t even notice.)
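To see where the bandwidth savings come from, here's a short Python sketch. It assumes the usual interpretation of the J:a:b notation (a reference block four pixels wide and two rows tall), which isn't spelled out in the ratios themselves:

def relative_rate(j, a, b):
    # J:a:b over a 4x2 reference block: J luma samples per row,
    # a chroma samples on the first row, b on the second.
    luma = 2 * j                 # luma samples in both rows
    chroma = 2 * (a + b)         # two color-difference channels (Cb, Cr)
    full = 2 * j * 3             # 4:4:4 equivalent: three full channels
    return (luma + chroma) / full

for scheme in ((4, 4, 4), (4, 2, 2), (4, 2, 0)):
    print(f"{scheme[0]}:{scheme[1]}:{scheme[2]} -> "
          f"{relative_rate(*scheme):.0%} of the full RGB data rate")

# 4:4:4 -> 100%, 4:2:2 -> 67%, 4:2:0 -> 50%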

PACK AND SHIP

The whole concept of digital video revolves around the idea of redundancy, that is, redundancy between adjacent frames of video. If we shoot one second of video (30 frames interlaced, or 60 frames progressive-scan), there are bound to be parts of each frame that don't change, or change only a little over time.

If we can come up with a system that analyzes each frame of video and identifies the parts that change versus the parts that don't, we can record and distribute that video quite efficiently, much more efficiently than if the video signal were analog, where each frame is repeated with any and all redundancies and the full bandwidth of a TV channel is required to pass the signal, no matter its resolution.
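As a toy illustration of that redundancy, the Python sketch below compares two synthetic frames and counts how few pixels actually need re-encoding when only a small region moves. The frame contents are random stand-ins, not real video:

import numpy as np

rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, (1080, 1920), dtype=np.uint8)
frame2 = frame1.copy()
frame2[100:200, 300:500] += 1            # only a small region changes

changed = frame2 != frame1               # the part worth re-encoding
print(f"{changed.sum():,} of {changed.size:,} pixels changed ({changed.mean():.2%})")

# 20,000 of 2,073,600 pixels changed (0.96%)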

And that's exactly how a video codec works. In everyday use, we speak of a codec as the system by which a video stream is encoded and decoded, using MPEG, JPEG, or wavelet processes. The piece of hardware that actually performs the compression is a video encoder.

There are several ways to compress video signals, but the most common is based on a principle known as discrete cosine transform (DCT). In a nutshell, DCT reduces the elements of a video image to mathematical coefficients. It is the heart of both the JPEG (Joint Photographic Experts Group) and MPEG (Moving Pictures Experts Group) standards, and is widely used for encoding everything from videos that you shoot on your $150 digital camcorder to those you shoot on a $10,000 broadcast camera.

While JPEG is used primarily for still images and, in its JPEG 2000 form, digital cinema, MPEG is the standard for almost all compressed video. The MPEG system starts with a string of video frames, known as a group of pictures (GOP), which can be almost any length but is typically 15 frames long, or about a half-second.

The first video frame in the sequence is encoded without reference to any other frame and serves as the reference for the frames that follow. It's known as an intracoded frame, or I-frame for short. (I-frames can also be called key frames.) Each I-frame has all of its picture information encoded into eight-pixel-by-eight-pixel blocks.

The second MPEG frame type, the predictive frame (P-frame), looks at the data in a previous I- or P-frame and determines what is actually changing between the two frames. Elements that change in position, color, or luminance are re-encoded, while elements that do not change are simply repeated. This allows for even greater compression of the video signal.

A third frame type, the bi-directional predictive frame (B-frame), looks forward and backward at reference frames to determine which pixels need to be re-encoded as new and which pixels can simply be repeated.
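Putting the three frame types together, here's a minimal Python sketch of a typical 15-frame GOP in display order. The "P-frame every third frame" spacing is an assumption for the example; it's a common pattern, not a requirement of the standard:

def gop(length=15, p_spacing=3):
    frames = []
    for i in range(length):
        if i == 0:
            frames.append("I")           # self-contained key frame
        elif i % p_spacing == 0:
            frames.append("P")           # predicted from an earlier frame
        else:
            frames.append("B")           # predicted from both directions
    return "".join(frames)

print(gop())   # IBBPBBPBBPBBPBB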

Clearly, I-frames are critical to digital video playback. If the system drops an I-frame, the picture freezes up or disappears altogether until the next I-frame comes along. This is what commonly causes drop-out on satellite and terrestrial DTV signals: the decoder can't resume converting the compressed files to video until it has another reference point.

IT’S ALL IN THE NUMBERS

There are two MPEG standards in wide use today for video compression. The first is MPEG-2, which is the basis for encoding DVDs and cable, satellite, and digital terrestrial TV broadcasts. MPEG-2 has been around for almost 20 years, and has done a good job, but there are practical limits to how much a video signal can be compressed using the MPEG-2 system.

To give you some idea of just how much MPEG compression is typically used, a high-definition television program in the 1920×1080 interlaced HDTV format (1080i) has an uncompressed bit rate of 995 megabits per second (Mbps). That would be impossible to fit into a standard TV channel, let alone on an optical disc. Therefore MPEG-2 compression, which can be done in several different flavors, packs that 1080i HDTV show down to about 18 Mbps so it can be transmitted to your TV in a 6-MHz-wide TV channel. That's a compression ratio of about 55:1, and yet the picture quality you see at that bit rate is very good.

The same thing applies to a DVD, which plays at the level of standard-definition TV (SDTV). Video encoded in the 720×480 interlaced SDTV format (480i) is packed down from its uncompressed bit rate (about 270 Mbps over SDI) to around 4 Mbps. Yet again, picture quality is as good as or better than anything you'd see on an analog TV set. Keep in mind that MPEG is a lossy codec: it achieves these ratios by discarding picture detail the eye is least likely to miss. Push the compression too far, and the loss becomes visible as odd-looking rectangular picture artifacts known as macroblocking.
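Those ratios are easy to sanity-check. The Python sketch below assumes 8-bit 4:2:2 sampling (16 bits per pixel) and counts active pixels only; the 270 Mbps figure quoted above is the SDI transport rate, which also carries blanking:

def uncompressed_mbps(width, height, frames_per_sec, bits_per_pixel=16):
    return width * height * frames_per_sec * bits_per_pixel / 1e6

hd = uncompressed_mbps(1920, 1080, 30)   # 1080i: 30 full frames per second
sd = uncompressed_mbps(720, 480, 30)     # 480i
print(f"1080i: {hd:.0f} Mbps uncompressed -> about {hd / 18:.0f}:1 at 18 Mbps")
print(f"480i:  {sd:.0f} Mbps uncompressed -> about {sd / 4:.0f}:1 at 4 Mbps")

# 1080i: 995 Mbps uncompressed -> about 55:1 at 18 Mbps
# 480i:  166 Mbps uncompressed -> about 41:1 at 4 Mbps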

While MPEG-2 has served us well, there’s room for improvement. A newer version of the MPEG standard (MPEG-4) has come into wide usage for private video networks, consumer video devices, and Internet video streaming.

The version of MPEG-4 commonly used for video compression is known as H.264, or Advanced Video Coding (AVC), which is Part 10 of the MPEG-4 standard. MPEG-4 uses the same I-, P-, and B-frames that MPEG-2 uses, but it adds motion compensation down to the sub-pixel level and can look forward and backward at multiple reference frames for predictive purposes.

In theory, those features should allow for even greater compression of a video signal. And they do, as proven in tests by the European Broadcasting Union (EBU) a few years back. In the EBU tests, HDTV signals could be compressed about 50 percent further than the acceptable bit rate for MPEG-2, which made it possible for European broadcasters to move ahead with a single HDTV broadcast standard: 1920×1080 pixels with progressive scan, at a picture refresh rate of 50 Hz.

The increased efficiency of MPEG-4 hasn’t escaped content distributors in the U.S. Satellite TV companies are in the midst of a switchover to MPEG-4 from MPEG-2, and cable TV companies will follow soon. The 3D TV channels coming to cable TV will launch with MPEG-2 and move to MPEG-4 by 2011.

Video streaming services such as Netflix and Vudu make exclusive use of MPEG-4 encoding. So do YouTube and just about every camcorder sold these days. Blu-ray discs, which were initially encoded using MPEG-2 compression, have largely moved to MPEG-4 coding to pack more data into a program and improve picture quality by delivering movies in a 1920×1080-pixel, progressive-scan format.

BETTER LATE THAN NEVER

You're probably wondering how it's possible for codecs to go back and forth in real time to do all of these predictive calculations. The answer is latency, a delay built into the video encoder (and your video decoder) that allows time for all of these mathematical calculations to take place.

If you were to watch an analog TV program in real time next to one that is being encoded and compressed, you’d see up to a two-second delay between the analog video playback and the digital version. That time interval is required for the codec to do its thing. Some broadcast compression systems that use lots of what is called forward error correction (FEC) to make up for “dropped” bits can add several seconds of latency.

Ordinarily, latency is not an issue when distributing digital video programs, unless there are audio latency issues (lip-sync problems). If the same digital video stream is feeding multiple TVs or monitors, then there will be no time offsets between any of the sets unless their internal video decoders are having problems.

That said, there are wavelet-based, proprietary compression schemes that aim to eliminate virtually all possible latency, particularly where the network is tightly controlled and supporting mission-critical, real-time video applications, such as command and control. The Pure3 codec, developed by Electrosonic and recently sold to Extron Electronics, is one such proprietary format.

SOME ASSEMBLY REQUIRED

Having created and encoded a digital video (and audio) stream, it’s time to do what AV pros do best: get those bits from point A to point B. Unlike analog video systems, digital video travels in a single cable, with the color, luminance, sync, audio, and data all flowing together. That’s an efficient way to move things around, but how do you separate the different bits from each other at the receiving end?

The answer is to break each stream of bits into packets and stamp every packet with an identifier, or packet ID (PID), that marks which elementary stream it belongs to. In a digital TV broadcast, for instance, there will be three kinds of PIDs: video, audio, and clock (officially known as the Program Clock Reference, or PCR). The clock is merely synchronizing information, but it is critical to reassembling the parts of the program.

OK, so now we have a bin full of packets, each identical in length (188 bytes) and each stamped with its PID, usually expressed as a hexadecimal code. And we can move that bin as a stream of packets from a transmitter to a receiver at some predetermined speed, or bit rate. How do we sort out the packets and rebuild them into a program?

The solution is to provide a parts list and assembly instructions to the digital video receiver. In the language of MPEG, these are known as the Program Association Table (PAT) and Program Map Table (PMT), respectively.

The PAT lists every program carried in a bit stream and points to the Program Map Table for each one. The PMT, in turn, tells the receiver which video, audio, and clock PIDs go together to make up a complete program. All the MPEG decoder has to do is sort the PIDs into the appropriate bins at high speed, fast enough to deliver uninterrupted video and audio to your TV set.
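Conceptually, the decoder's job is a high-speed sorting loop. Here's a stripped-down Python sketch: real transport stream packets carry the 13-bit PID inside a 4-byte header, but we model each packet as a simple (pid, payload) pair, and the PID values below are made up for the example:

from collections import defaultdict

def demux(packets, pmt):
    # pmt maps a role ("video", "audio", "pcr") to its PID,
    # just as a real Program Map Table would.
    wanted = {pid: role for role, pid in pmt.items()}
    bins = defaultdict(list)
    for pid, payload in packets:
        if pid in wanted:                # ignore PIDs from other programs
            bins[wanted[pid]].append(payload)
    return bins

stream = [(0x31, b"v0"), (0x34, b"a0"), (0x31, b"v1"), (0x40, b"other")]
program = demux(stream, {"video": 0x31, "audio": 0x34})
print(program["video"])   # [b'v0', b'v1']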

If you think about it, the PID system is a tremendous improvement over analog video. You can combine different audio PIDs with a single video PID to support multiple language tracks, add a data PID for closed captions, or carry standard- and high-definition versions of the same program as separate video PIDs. Or you can have both stereo and surround-sound audio tracks in the same stream. Just change the PIDs and you're enjoying 5.1 or 7.1 surround sound effects.

It's important to note that the bit rate doesn't change, nor does the channel width, as you mix and match PIDs. You simply create room for more video programs by parceling out the available bits. The trade-off comes in image quality: jam two HDTV programs and two SDTV programs into the same stream, and none of them will look very good. (But that doesn't stop DTV service providers from trying.)
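The arithmetic is unforgiving. Here's an illustrative Python budget for one 6-MHz ATSC terrestrial channel, whose payload is fixed at about 19.39 Mbps; the program mix and rates below are invented for the example:

CHANNEL_MBPS = 19.39   # ATSC payload of one 6-MHz terrestrial channel

programs = {"HD main": 12.0, "SD sub 1": 3.5, "SD sub 2": 3.0}
overhead = CHANNEL_MBPS - sum(programs.values())
assert overhead >= 0, "bit budget exceeded"
for name, rate in programs.items():
    print(f"{name}: {rate} Mbps ({rate / CHANNEL_MBPS:.0%} of the channel)")
print(f"tables and null packets: {overhead:.2f} Mbps")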

PICK A PIPE, ANY PIPE

Because the structure of a digital video signal is simple (it's just a stream of data), you can use just about any type of cabling to pipe the signals where you want them to go. Coaxial cable works just fine; so does Cat-5 cable. Optical fiber is even better.

The key is to ensure you have sufficient bandwidth to pass the signals. Depending on the bit rate, you may need bandwidth in the range of several hundred megahertz and possibly a gigahertz to get the job done. That’s a radical departure from designing a video system the old-fashioned analog way. With digital, you’re not so concerned about handling and switching different video signal formats (they’re all component, by the way) as you are with bandwidth and bit rates.

One commonly used digital video transport standard is the Serial Digital Interface (SDI). SDI was developed to carry ITU-R BT.601 video (interlaced analog video encoded in digital form) and has a maximum data rate in the range of 360 Mbps. High-definition video requires more bits, so a variant known as HD-SDI came into existence with a data rate of 1.485 gigabits per second (Gbps).

For production and transport of 1920×1080 progressive-scan video at a frame rate of 60 Hz, a dual-link HD-SDI connection is normally used. This connection can pass data at up to 3 Gbps. And these days, a new single-wire 3G-SDI system is set to replace dual-link. Needless to say, 3G connections make heavy use of optical fiber cable whenever possible.
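Those link rates aren't arbitrary; they fall straight out of the raster. The Python sketch below assumes 10-bit 4:2:2 sampling (two 10-bit samples per pixel) and counts the total raster, blanking included:

def sdi_gbps(total_width, total_height, frames_per_sec):
    # two samples per pixel (Y plus alternating Cb/Cr), 10 bits each
    return total_width * total_height * frames_per_sec * 2 * 10 / 1e9

print(f"SD-SDI (480i):    {sdi_gbps(858, 525, 30):.3f} Gbps")    # ~0.270
print(f"HD-SDI (1080i):   {sdi_gbps(2200, 1125, 30):.3f} Gbps")  # 1.485
print(f"3G-SDI (1080p60): {sdi_gbps(2200, 1125, 60):.3f} Gbps")  # 2.970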

THE OLD SOFT SWITCHEROO

Now here’s why digital video presents such an extreme paradigm shift to the pro AV industry: When you switch between digital video signals, such as changing channels on a TV or signals on a routing switcher, you are merely switching from one stream of data to another, or from one cluster of packets to others in the same stream. You are not making or breaking physical contact, nor are you tuning in a different RF channel.

That's a departure from the more familiar analog video switching process, which must physically make and break contacts for up to five discrete signals (red, green, blue, and horizontal and vertical sync), compensating for any changes in signal amplitude and phase that might occur along the way. Digital video switching is more like dipping into a stream and grabbing the packets you need. All the packets are present in the stream, but the video receiver only looks for those that apply and conform to the MPEG tables that it's working with. In InfoComm classes, we refer to this process as software-based video switching. There are no issues with sync-pulse degradation or signal phase to deal with; that's all handled in the display during the digital-to-analog conversion process.

What is critical in any digital signaling system is the signal-to-noise ratio. If the level of the digital signal drops too low relative to any noise in the distribution system, it will abruptly disappear. This phenomenon is well known to satellite and terrestrial DTV viewers as the “cliff effect.” It’s as if the signal suddenly fell off a cliff!

Fortunately, digital video systems include some degree of protection against signal dropout, usually in the form of forward error correction. A well-designed digital video transport system assumes that not all bits will make it through intact and therefore adds redundant bits to ensure a high quality of service (QoS).
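To get a feel for how redundant bits buy protection, here's a toy Python example: one XOR parity packet per group lets the receiver rebuild any single lost packet without a retransmission. Real systems use far stronger codes, such as Reed-Solomon, but the principle is the same:

from functools import reduce

def xor_all(packets):
    # byte-wise XOR across equal-length packets
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def add_parity(group):
    return group + [xor_all(group)]      # append one parity packet

def recover(group_with_loss, lost_index):
    # XOR of all surviving packets (parity included) rebuilds the lost one
    present = [p for i, p in enumerate(group_with_loss) if i != lost_index]
    return xor_all(present)

packets = [b"pkt1", b"pkt2", b"pkt3"]
protected = add_parity(packets)
print(recover(protected, 1))   # b'pkt2', rebuilt from the survivors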

WRAPPING UP

There are many ways that digital video can be implemented, one being a private video network using Internet Protocol (IP) packet headers to feed digital signage displays. In that case, all MPEG programs are present in the stream (or can be called up on demand) and sent to any or all connected displays by using their discrete IP addresses.

Need to add more monitors? Plug them in and connect them to the network. No need to pull additional cabling other than the connection to a router. AT&T’s U-Verse system works as a pure IPTV network, and their set-top boxes switch between TV channels by using MPEG program numbers and IP headers. This is just another example of software-based video switching. And U-Verse can work over wired and wireless networks, too.
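At the network level, that kind of one-to-many delivery typically rides on UDP multicast. Here's a minimal Python sketch; the group address and port are placeholders, and real IPTV systems layer RTP, error correction, and session control on top:

import socket

GROUP, PORT = "239.1.1.1", 5004   # placeholder multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

ts_packet = b"\x47" + bytes(187)       # one 188-byte TS packet (sync byte 0x47)
sock.sendto(ts_packet, (GROUP, PORT))  # every subscribed display receives it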

Sounds intriguing, right? So when are you going to start using digital video?

Senior contributing editor Pete Putman was InfoComm’s 2008 Educator of the Year. His 2010 InfoComm Academy sessions included “Digital Video 201” and “Practical RF for System Integrators.”
