Signal distribution / Control

Thoughts On AV Over IP

10/08/2017 7:16 PM Eastern

The following essays are from Sound & Video Contractor’s twice-monthly AV Over IP newsletter.



VIDEO AND IP TRANSPORT—NEW DIRECTIONS

It has never been a trivial task to get video over IP networks. IP networks were built to move text data, but once voice was carried over them successfully and efficiently, engineers began to ask whether video could be transported as well. Video conferencing and IPTV were built on the premise that, if the network was not prone to loss, IP using UDP (User Datagram Protocol) could deliver video effectively. Vidyo demonstrated that the Internet could carry video conferencing if scalable video coding was used at the source. However, things really changed when Netflix began streaming video. Netflix used HTTP over TCP, the same approach as nearly all other web applications. Broadcasters and streaming server manufacturers soon adopted the method, now called adaptive bit rate (ABR) streaming.

Despite this, TCP has not kept pace with changes in network technology as well as we might expect. Around 2010, Jim Gettys, then at Bell Labs, discovered that TCP performance was suffering from its inability to deal with large buffers in the network, a problem he named bufferbloat. TCP detects congestion by observing lost packets and responds by lowering its sending rate. Large buffers delay those losses, so TCP is slow to react and can occasionally cut its sending rate too severely. So, is the answer not to use TCP? Not necessarily. Since the only other choice is UDP, which is very sensitive to loss, transport over the Internet continues to be a problem.

Two new approaches have been suggested to deal with this dilemma. One comes from the IT industry; the other is offered within the AV industry. The first comes from Google. Its TCP BBR (Bottleneck Bandwidth and Round-trip propagation time) works on the idea that congestion can be detected by continually estimating the delivery rate of the flow rather than waiting for packet loss. Google’s experiments have shown that TCP BBR provides about the same throughput as traditional TCP but with much lower latency. Because congestion control runs at the sender, BBR can be deployed at the streaming server; a connection whose server does not use it simply performs as a traditional TCP connection, and the benefits of BBR are not realized.
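
To make the delivery-rate idea concrete, here is a minimal Python sketch of the windowed-max bandwidth estimate that a BBR-style sender maintains. The class name and sampling interface are invented for illustration; this is not Google’s implementation, only the core filtering idea:

```python
from collections import deque

class DeliveryRateEstimator:
    """Toy sketch of BBR-style bandwidth estimation: keep the maximum
    delivery rate observed over a sliding window of recent samples."""

    def __init__(self, window=10):
        self.samples = deque(maxlen=window)  # recent rate samples (bytes/sec)

    def on_ack(self, bytes_delivered, interval_sec):
        """Record one delivery-rate sample computed from an ACK."""
        if interval_sec > 0:
            self.samples.append(bytes_delivered / interval_sec)

    def bottleneck_bandwidth(self):
        """Windowed max filter: the largest recent rate approximates
        the capacity of the bottleneck link."""
        return max(self.samples) if self.samples else 0.0

est = DeliveryRateEstimator()
est.on_ack(125_000, 0.1)   # 1.25 MB/s sample
est.on_ack(100_000, 0.1)   # 1.00 MB/s sample
print(est.bottleneck_bandwidth())  # 1250000.0
```

A real sender would pace its transmissions at roughly this estimated rate instead of waiting for a lost packet to signal congestion.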

The second new direction has been proposed by Haivision and Wowza and is being promoted through the group they’ve organized, the SRT Alliance. Several dozen manufacturers have joined the alliance and are committed to supporting the SRT protocol. SRT (Secure Reliable Transport) is based on UDP, so it avoids the issues that have developed with TCP, and it carries lower overhead. By layering retransmissions and control messages on top of UDP (the design grew out of the UDT fast file transfer protocol), SRT behaves much like TCP: it is a reliable protocol, virtually assuring delivery of the video, and it can adapt to bandwidth fluctuations.
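
The retransmission idea can be sketched in a few lines. The toy Python simulation below is not the real SRT wire protocol (the function name and loss model are invented); it only shows how a retransmit loop over sequence-numbered datagrams turns lossy UDP-style delivery into reliable delivery:

```python
import random

def simulate_reliable_udp(payloads, loss_rate=0.3, seed=1):
    """Toy model of SRT-style reliability over UDP: the receiver notices
    sequence-number gaps and the sender retransmits just those packets
    (a simplified illustration, not the actual SRT control messages)."""
    rng = random.Random(seed)
    received = {}
    pending = list(enumerate(payloads))        # (seq, data) still to deliver
    while pending:
        lost = []
        for seq, data in pending:
            if rng.random() < loss_rate:
                lost.append((seq, data))       # dropped in transit
            else:
                received[seq] = data           # delivered and acknowledged
        pending = lost                         # retransmit only the gaps
    return [received[i] for i in range(len(payloads))]

print(simulate_reliable_udp([b"frame0", b"frame1", b"frame2"]))
# [b'frame0', b'frame1', b'frame2']
```

Note that, unlike TCP, only the missing datagrams are resent, which is part of why the overhead stays low.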

Both new approaches seem technically solid. However, both need more field testing against the real problems of the Internet. For example, how will each share bandwidth with other traffic such as data-oriented cloud file transfers, routine business traffic in VPN tunnels, and adaptive bit rate streams? While these conditions can be difficult to simulate in the lab, it is not impossible. I believe such testing should be conducted and the results made available to users. In the near future, the IT community may require this testing before adopting these protocols.

WHAT IT TAKES TO CONNECT

To avoid problems when installing an AV device on an IP network, it is prudent to consider carefully the procedure the device uses when it connects to that network. Certain parameters are required to communicate with other devices; these are either acquired automatically or configured manually.

Let’s begin with what the device must know. It must have a local hardware address for its physical interface to the network. This 48-bit MAC address is assigned by the manufacturer of the interface and inserted at the factory; messages are always sent from device address to device address. The device will also usually need an IP address, a subnet mask, and a local router’s IP address. Let’s say the manufacturer shipped your device configured to get these parameters automatically. This is done using DHCP (Dynamic Host Configuration Protocol): when the device is powered up, it sends a broadcast DHCP request on the local network asking for a configuration assignment.

The response comes from a local DHCP server, often installed in the local router. With a broadcast reply, the server tells your device its IP address, subnet mask, and the router’s IP address. Some systems also use the DHCP server to deliver the addresses of other key resource servers, such as a call manager, DNS server, or authentication server. Note that the DHCP request is sent as a local broadcast. This means the DHCP server must be on the same local subnet as the device, since broadcasts normally don’t pass through routers to other networks.
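
As a rough illustration of the assignment step, here is a toy Python model of a DHCP server handing out leases from a pool. It omits the real four-step handshake and the UDP broadcast transport on ports 67/68, and the class name and addresses are invented for the sketch:

```python
import ipaddress

class ToyDhcpServer:
    """Highly simplified model of DHCP assignment: hand out the next free
    address from a pool along with the mask and router address."""

    def __init__(self, network="192.168.1.0/24", router="192.168.1.1"):
        net = ipaddress.ip_network(network)
        self.pool = [str(h) for h in net.hosts()][1:]  # skip the router's .1
        self.mask = str(net.netmask)
        self.router = router
        self.leases = {}  # MAC address -> assigned IP

    def discover(self, mac):
        """Answer a broadcast request: the same MAC keeps the same lease."""
        if mac not in self.leases:
            self.leases[mac] = self.pool.pop(0)
        return {"ip": self.leases[mac], "mask": self.mask,
                "router": self.router}

server = ToyDhcpServer()
print(server.discover("aa:bb:cc:dd:ee:01"))
# {'ip': '192.168.1.2', 'mask': '255.255.255.0', 'router': '192.168.1.1'}
```

Everything the device needs to talk beyond its own subnet arrives in that one reply.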

If your device isn’t configured to use DHCP, you must manually configure its IP address, subnet mask, and local router IP address. If the device will communicate over the web, you will also need to enter a DNS server address. One parameter deserves particular caution: the subnet mask. Installers tend to simply use 255.255.255.0 because it is the most common mask. However, the other devices on your network may be using a different mask, such as 255.0.0.0 or 255.255.255.192. If your mask doesn’t match the rest of the devices on the network, you may pay a severe penalty in troubleshooting time while you work out why certain devices can communicate with their partners while others can’t.
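
Python’s standard ipaddress module makes it easy to check whether a mask mismatch would split two hosts onto different subnets. The addresses below are illustrative:

```python
import ipaddress

def same_subnet(ip_a, ip_b, mask):
    """Return True if host B is 'local' from host A's point of view
    under the given subnet mask."""
    net_a = ipaddress.ip_network(f"{ip_a}/{mask}", strict=False)
    return ipaddress.ip_address(ip_b) in net_a

# With matching /24 masks, the two hosts talk directly:
print(same_subnet("192.168.1.10", "192.168.1.70", "255.255.255.0"))   # True
# If one installer used 255.255.255.192 (/26), .70 falls in a different
# subnet than .10, so its traffic is wrongly handed to the router:
print(same_subnet("192.168.1.10", "192.168.1.70", "255.255.255.192")) # False
```

Running this check against every statically configured device on a job can save exactly the troubleshooting time described above.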

Some networks have attached devices that are configured with only an IP address and a mask, and no local router address. In such a design, communication with devices on other networks or the Internet is impossible. In some cases, devices discover their configuration automatically through a proprietary protocol; this happens with some audio devices. IP addresses aren’t necessary because all messages are sent station to station on the local network. These devices likewise cannot communicate with devices on other networks or use the Internet. Troubleshooting such systems can be difficult because common tools such as ping and traceroute won’t work.

AV SECURITY CONCERNS

In this essay, my objective is to focus on security of AV devices. However, I’m going to use an indirect approach to this topic. First, I’ll discuss an internet protocol and service that is absolutely critical to the proper function of nearly all IP networks. It is especially critical to the use and function of the Internet. Then, we’ll turn our attention to the attack that denied the use of that resource for a period of time. Lastly, I’ll explain how AV devices played a critical part in the attack.

The Domain Name System (DNS) is vital to the operation of almost all IP networks; the only exception would be a small, isolated network of a few devices with no connection to other company networks or to the Internet. DNS depends on systems and devices across the globe. It is a stored, distributed database of names that maps those names to specific IP addresses. For example, I understand that the address 8.8.8.8 is related to Google because DNS has the network address for the server in its database. Now, how is this distributed database critical to each of us? When I go onto the web and click on an icon, or type a name such as matrox.com into a browser address field, I have asked for a resource from a server. However, I don’t need the address of that server, because DNS tells my browser it is at 138.11.2.65, and that’s where my request packets are sent. Most people are surprised to learn that when you visit a typical home page of a company or college, your browser sends 12-18 DNS queries to obtain all of the resource files necessary to build that page. In other words, take away DNS and you can’t browse or get resources from the Internet. Think about the impact of this function on customers who are looking at your company’s web site for products or services.
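
The delegation idea behind this distributed database can be sketched with a toy resolver. The zone data below is a hand-built stand-in, not real DNS records: each “server” knows only where to delegate next, until an authoritative answer is reached:

```python
# Toy model of DNS's distributed lookup. The nested dict stands in for
# the hierarchy of name servers (illustrative data, not real zones).
ROOT = {"com": {"example.com": "93.184.216.34"}}

def resolve(name, servers=ROOT):
    """Walk the delegation chain: root -> TLD -> authoritative record."""
    tld = name.rsplit(".", 1)[-1]
    zone = servers[tld]    # the root refers us to the .com servers
    return zone[name]      # the authoritative server returns the A record

print(resolve("example.com"))  # 93.184.216.34
```

A real resolver performs the same walk over the network, caching each answer, which is why a single page load can still trigger a dozen or more queries for the page’s many resource hosts.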

Now, let’s turn our attention to an attack on DNS that was very disruptive. On October 21, 2016, a denial-of-service attack was launched on Dyn, a major DNS service provider. The disruption made many web sites nearly inaccessible for over two hours. Companies affected included Fox News, Amazon, and PayPal. After a few hours the attack was blocked, but it repeated two more times during the day. I have spent most of my career as a college professor, so I like multiple choice questions. Here’s one for you: The attack was primarily launched by

a. a clandestine, nation-sponsored group from the Pacific rim, probably North Korea.

b. disgruntled computer science students.

c. compromising a large number of cameras and other embedded system devices.

d. a former employee of Paypal.

e. compromising a major retailer’s payment server.

The answer is c. Some reports indicate that the botnet of cameras and devices may have exceeded 100,000; the attack could be launched because someone had control of this vast number of devices and could issue the attack command. Here’s the really scary part. The botnet, named Mirai, was built by compromising devices using a list of about 60 common username/password combinations set as factory defaults. The passwords were never changed by the users, and this left the devices open to the Mirai control server. The AV industry must stop shipping devices with default authentication combinations like admin/blank. At least one camera manufacturer has taken a major step: it requires the password to be changed before the camera can be used at all. I believe that is a major step in the right direction. However, as users, we must be informed enough to change the default to something that is not simple to guess. It certainly doesn’t reflect good business operations to have it made public that our cameras were part of an attack that interrupted service to tens of millions of Internet users.
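
An integrator can screen for exactly this weakness during commissioning. The sketch below checks a credential pair against a small illustrative subset of the kind of factory defaults Mirai scanned for (the set shown is an assumption for the example, not the full published list):

```python
# Illustrative subset of Mirai-style factory default logins.
MIRAI_DEFAULTS = {("admin", "admin"), ("admin", ""), ("root", "12345"),
                  ("root", "default"), ("admin", "password")}

def is_vulnerable(username, password):
    """Flag credentials a Mirai-style scanner would guess immediately."""
    return (username, password) in MIRAI_DEFAULTS

print(is_vulnerable("admin", ""))            # True  -- still at the default
print(is_vulnerable("admin", "T7#kPzq2!x"))  # False -- changed by the user
```

A commissioning checklist that runs every installed camera and encoder through a test like this would have kept those devices out of the botnet.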

NETWORK ERRORS, LOSS, AND MITIGATION

Network errors can occur at many levels during transmission of data and files. They can happen at the physical level due to interference, reflections, defective transmitters, faulty receivers, or other factors. IP packets can be dropped if they are corrupted, incorrectly processed by switching devices, or contain invalid fields. However, the most common reason for packet loss is the overfilling of network buffers. In this newsletter, we will introduce you to many forms of errors, the methods used to mitigate them, and the impact they have on applications.

If you read basic data communications texts, most will describe a digital signal as a pattern of highs and lows that represent ones and zeros. In practice, this is rarely the case. Most transmitted digital signals are complex waveforms that represent 2, 4, or 8 bits in each pattern, and these patterns can be harder to detect correctly than simple highs and lows. For example, a sender may send a waveform that represents 1101. If the receiver interprets the signal as 0011, a three-bit error has occurred.
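
A concrete example of such multi-bit symbols is PAM-4 signaling, where each pair of bits maps to one of four voltage levels, so one misread level corrupts two bits at once. A minimal sketch, using a Gray-coded level mapping as one common choice (the numeric levels here are illustrative):

```python
# PAM-4: each pair of bits becomes one of four signal levels.
# Gray coding means adjacent levels differ by only one bit, so the most
# likely detection error (off by one level) corrupts only a single bit.
LEVELS = {"00": -3, "01": -1, "11": +1, "10": +3}

def encode_pam4(bits):
    """Map a bit string (even length) to a sequence of PAM-4 levels."""
    return [LEVELS[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(encode_pam4("1101"))  # [1, -1]
```

The receiver’s job is the reverse lookup: deciding which of the four levels each noisy waveform sample was meant to be.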

Physical level bit errors are measured by sending a stream of bits and determining how many are correctly received. The test is usually called a BER test (for bit error rate); technicians are often redundant and refer to a “BERT test.” The result is usually a negative power of ten, such as 10⁻⁶ or 10⁻⁸. A result of 10⁻⁶ would be interpreted as 1 errored bit per 1,000,000 bits sent. Test devices that perform a BER test have connections for each major cable type: coax, twisted pair, fiber, and so forth. These tests are usually run on a single link in the network.

On the other hand, packet loss is measured as a percent. Unlike the BER test, network engineers often measure packet loss at layer three, the IP level. They can send a series of packets with a test device or use software that reads the loss level from protocols such as RTCP (the RTP Control Protocol). RTCP is routinely used with VoIP and many forms of video.

The relationship between BER results and packet loss results is not straightforward, because packet lengths vary, generally from about 64 bytes to 1500 bytes. For example, a BER of 10⁻⁶ (one bit in 1,000,000 errored) will cause a packet loss rate of about 0.1% with 125-byte packets, since each packet contains 1,000 bits. On the other hand, typical file transfer packets of 1250 bytes will see an average loss of about 1%. A loss rate of 0.1% would not seriously impact most VoIP conversations, but a loss rate of 1% would seriously degrade a digital signage flow.
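
Assuming independent bit errors, the conversion from BER to expected packet loss is a one-liner, and the figures from the text fall out directly:

```python
def packet_loss_rate(ber, packet_bytes):
    """Probability that a packet contains at least one errored bit,
    assuming independent bit errors: 1 - (1 - BER)^bits.
    For small BER this is approximately bits * BER."""
    bits = packet_bytes * 8
    return 1 - (1 - ber) ** bits

print(packet_loss_rate(1e-6, 125) * 100)   # ≈ 0.1 (percent lost)
print(packet_loss_rate(1e-6, 1250) * 100)  # ≈ 1.0 (percent lost)
```

This is why the same physical link can be “good enough” for short VoIP packets yet noticeably lossy for full-size file transfer packets.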

Physical level errors are sometimes mitigated using FEC (forward error correction). While the mathematics behind modern methods can be quite complex, the basic idea can be shown with an example. Suppose I have 64 bits (eight bytes) to send. First I determine the parity of each byte; that is, whether the byte contains an even or odd number of ones. I build a code that contains a one for each byte with odd parity and a zero for each byte with even parity. Then I repeat the parity determination column-wise, using the 1st, 9th, 17th, 25th, etc. bits, then the 2nd, 10th, 18th, etc., and so on until I’ve rechecked parity on all 64 bits. This process, a two-dimensional parity check combining a vertical and a longitudinal redundancy check, gives me a code of 16 parity bits. I append this code to the message and send it. The receiver repeats the parity checks and can detect and correct any single-bit error: the mismatching row parity and column parity intersect at the errored bit. While I’ve added two bytes to an eight-byte message, a 25% increase in overhead, I have a technique that allows detection of 99% of all errors. Modern FEC techniques are much more complex, achieving better protection with lower overhead, but the principle is the same: add a small number of additional bits as an error code, and the receiver can fix errors in transmission.
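
The row-and-column parity scheme described above can be implemented directly. The following sketch (function names invented for the example) computes the 16 parity bits for an eight-byte message and then corrects a single flipped bit:

```python
def row_col_parity(data):
    """Compute the 8 row (per-byte) and 8 column parity bits
    for a message of 8 bytes."""
    rows = [bin(b).count("1") % 2 for b in data]
    cols = [sum((b >> (7 - c)) & 1 for b in data) % 2 for c in range(8)]
    return rows, cols

def correct_single_error(data, rows, cols):
    """Locate and fix a single flipped bit using the transmitted parities:
    the mismatching row and column intersect at the errored bit."""
    new_rows, new_cols = row_col_parity(data)
    bad_r = [i for i in range(8) if rows[i] != new_rows[i]]
    bad_c = [i for i in range(8) if cols[i] != new_cols[i]]
    if len(bad_r) == 1 and len(bad_c) == 1:
        data = list(data)
        data[bad_r[0]] ^= 1 << (7 - bad_c[0])  # flip the errored bit back
    return data

message = [0x41, 0x56, 0x20, 0x6F, 0x76, 0x65, 0x72, 0x20]  # "AV over "
rows, cols = row_col_parity(message)

corrupted = list(message)
corrupted[3] ^= 0b00010000   # one bit flipped in transit
print(correct_single_error(corrupted, rows, cols) == message)  # True
```

Two or more flipped bits can still be detected (the parities won’t reconcile) but not always corrected, which is exactly the gap that modern, more sophisticated codes close.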
