
Phil Hippensteel on Comparing Video Transport Methods: TCP vs. UDP

1/12/2017 5:18 PM Eastern


One of the more challenging issues in transporting video over IP networks is understanding the difference between the two transport protocols: TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). While the difference is often overlooked, the choice between these two protocols largely determines how the video application will react to the quality, architecture, and characteristics of the underlying physical network. This is especially true if the network is bursty, subject to loss or latency, or poorly designed. For example, a poor-quality video experience may be due to network loss, and choosing TCP rather than UDP could remedy the problem. On the other hand, if near real-time delivery is a requirement, UDP may be the better choice.

First, let’s look at each protocol. UDP was developed to send short control messages over IP. In the early days of telecommunications, these messages were called datagrams because they usually contained text rather than digitized voice. The UDP datagram header contains only four fields: source and destination port numbers to identify the sending and receiving application processes, a length value, and an error check code. (Refer to Figure #1.)
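As a rough sketch, that four-field header can be packed and parsed with Python’s `struct` module. The port numbers here are arbitrary examples, and the checksum is left at zero for brevity:

```python
import struct

# UDP header layout (RFC 768): four 16-bit fields, 8 bytes total.
# source port | destination port | length | checksum
def build_udp_header(src_port, dst_port, payload_len, checksum=0):
    # The length field covers the 8-byte header plus the payload.
    return struct.pack("!HHHH", src_port, dst_port, 8 + payload_len, checksum)

def parse_udp_header(header):
    src, dst, length, checksum = struct.unpack("!HHHH", header[:8])
    return {"src_port": src, "dst_port": dst,
            "length": length, "checksum": checksum}

hdr = build_udp_header(5004, 5006, payload_len=160)
print(parse_udp_header(hdr))
# {'src_port': 5004, 'dst_port': 5006, 'length': 168, 'checksum': 0}
```

Note that the length field counts the header as well as the data, which is why a 160-byte payload yields a length of 168.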


The advantages of using UDP include:

  • It has very low overhead (8 bytes).
  • It is easy to understand.
  • It allows the application developer to control the reaction to network conditions and changes in these conditions.

But UDP fails to provide for other important functions:

  • It makes no provision to retransmit lost data.
  • It has no control over the sending rate of the application; consequently, the network connection can be overrun.
  • Packets arriving with an incorrect error check are simply dropped.
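Those gaps are visible even in a trivial sender: a UDP socket hands a datagram to the network and gets no acknowledgement back, whether or not anyone is listening. A minimal Python sketch (the address, port, and payload are placeholders):

```python
import socket

# Fire-and-forget: sendto() reports only that the datagram left this
# host -- there is no acknowledgement, retransmission, or rate control.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sent = sock.sendto(b"frame-0001", ("127.0.0.1", 5004))
print(sent)  # 10 -- bytes handed to the network, not bytes delivered
sock.close()
```

The call succeeds even with no receiver on the far end; any loss, reordering, or overrun has to be handled by the application itself.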

Nevertheless, VoIP was developed using UDP and was one of the first applications to place large volumes of audio material on enterprise networks. Voice packets generally contained 20 ms of digitized voice, a short RTP header (Real-time Transport Protocol, to be covered in a future newsletter), the UDP header, and the IP header. These were sent to the network at a fixed rate of 50 packets per second. No provision was made to accommodate interfering traffic or network loss and delay. Yet, over time, it proved to be a valuable method of delivery.
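The arithmetic behind that 50-packets-per-second stream is easy to check. Assuming a G.711 codec at 64 kb/s (the article does not name the codec, so this is an illustrative assumption) and IPv4 with no options:

```python
# Per-packet byte counts for one G.711 voice stream (assumed codec).
PAYLOAD = 160      # 20 ms of 64 kb/s audio = 0.020 s * 8000 bytes/s
RTP_HDR = 12       # minimal RTP header
UDP_HDR = 8
IP_HDR  = 20       # IPv4, no options

packet_bytes = PAYLOAD + RTP_HDR + UDP_HDR + IP_HDR   # 200 bytes
packets_per_second = 50                               # one every 20 ms
bitrate = packet_bytes * 8 * packets_per_second       # bits per second
print(bitrate)  # 80000 -> 80 kb/s on the wire, before layer-2 framing
```

A quarter of that 80 kb/s is header overhead, which is one reason voice engineers kept the payload per packet small but not tiny.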

Using UDP with Video

When video conferencing (VC) engineers studied how VoIP worked, it was a simple extension to place video in the data field of the IP/UDP/RTP packets (See Figure #2). However, since video packets were generally much larger, the same rapid rate of transmission required much more bandwidth. In addition, VC traffic could interfere with other traffic flows and degrade the corresponding applications. It took some time for network engineers and VC vendors to work out these problems. However, as we now know, it all seems to work, and video conferencing has become more and more important to employees and consumers.

When cable companies looked at the use of IP, they saw some tremendous advantages. Generally called IPTV, the method used the IP/UDP format but formatted the data field using a standard method borrowed from satellite transmission called the MPEG-2 Transport Stream. This made the core network for the cable companies much more flexible and provided for added features such as redundant paths.
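As a sketch of that format: Transport Stream packets are a fixed 188 bytes, each beginning with the sync byte 0x47, and IPTV deployments commonly bundle seven of them per UDP datagram (7 × 188 = 1316 bytes, which fits comfortably in a standard 1500-byte Ethernet MTU):

```python
# MPEG-2 Transport Stream framing inside a UDP payload.
TS_PACKET = 188
SYNC_BYTE = 0x47

def split_ts(datagram):
    """Split a UDP payload into TS packets, verifying each sync byte."""
    packets = [datagram[i:i + TS_PACKET]
               for i in range(0, len(datagram), TS_PACKET)]
    if not all(p and p[0] == SYNC_BYTE for p in packets):
        raise ValueError("lost TS packet alignment")
    return packets

# Build a dummy 1316-byte payload: 7 packets of sync byte + 187 zeros.
payload = (bytes([SYNC_BYTE]) + bytes(TS_PACKET - 1)) * 7
print(len(split_ts(payload)))  # 7
```

The fixed packet size and sync byte are what let receivers regain alignment quickly after loss, a property inherited from the satellite world.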

TCP Characteristics

Video transported with TCP will react very differently to various network conditions.  This more complex algorithm was intentionally designed to automatically adapt to changes in the network. 

It provides these advantages:

  • It recognizes when packets have been dropped and provides for retransmission of those packets.
  • It adjusts to network changes including
    • latency
    • packet loss rate
    • levels of competing traffic
    • available bandwidth.

Consequently, when video is transported using TCP, the only critical issue is whether the video is delivered at a rate equal to or higher than the playout bit rate. For example, if the receiver is playing a video at 4 Mb/sec, the TCP delivery rate must be at or above that level. A receive buffer accommodates fluctuations in traffic delivery rates, so it is the average rate at which the video is delivered that is critical.
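A toy simulation makes the point: if one-second delivery rates fluctuate around the 4 Mb/s playout rate but average at or above it, the receive buffer never empties. All of the numbers below are illustrative:

```python
# Toy model of a receive buffer absorbing delivery-rate fluctuations.
PLAYOUT = 4_000_000   # bits consumed per second by the decoder
deliveries = [6_000_000, 2_000_000, 5_000_000, 3_000_000]  # avg = 4 Mb/s

buffered = 8_000_000  # two seconds of pre-buffered video, in bits
underrun = False
for rate in deliveries:          # one-second intervals
    buffered += rate - PLAYOUT   # net change this second
    if buffered < 0:
        underrun = True          # buffer emptied: playback would stall
print(underrun)  # False -- average delivery matched the playout rate
```

Halving every delivery rate in the list would drive the buffer negative within a few seconds, which is exactly the stall behavior the buffer is meant to postpone, not prevent.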

TCP transport also has some disadvantages.  These include:

  • It introduces delay, primarily caused by the receive buffer.
  • Some forms of TCP video will degrade when the delivery rate falls too low for more than 8-10 seconds.

The second point happens occasionally with ABR (adaptive bit rate) video.  Examples include Netflix, Hulu and other web streaming services.
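The adaptive behavior can be sketched as a player choosing the highest rendition its measured throughput can sustain. The bit-rate ladder and safety margin below are hypothetical, not taken from any particular service:

```python
# Hypothetical ABR bit-rate ladder, lowest to highest, in bits/s.
LADDER = [800_000, 1_500_000, 3_000_000, 6_000_000]

def pick_rendition(measured_bps, safety=0.8):
    """Pick the highest rendition that fits under measured throughput,
    with headroom (safety factor) against delivery-rate fluctuations."""
    usable = measured_bps * safety
    candidates = [r for r in LADDER if r <= usable]
    return candidates[-1] if candidates else LADDER[0]

print(pick_rendition(4_200_000))  # 3000000
print(pick_rendition(500_000))    # 800000 (lowest rung; may rebuffer)
```

When throughput stays below even the lowest rung for long enough, the buffer drains and playback degrades, which matches the 8-10 second behavior noted above.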

In several of our future newsletters, we’ll delve into more detail about the network impairments of delay, loss, and jitter, and how they affect each type of video.
