Buffers in the network are critical for the delivery of audio and video, yet we rarely think about the role they play. Properly sized and managed, they improve the delivery of our media. Too large or left unmanaged, however, they can cause dropped packets or poor playout performance.
Buffers are designated blocks of memory that provide temporary storage for packets. There are two types. The first type smooths the flow of packets from component to component. For example, if your encoder has an IP packet ready to be sent but the network is not yet available, the packet is placed in a buffer managed by the network interface controller's driver. When the network becomes available, the packet is sent. On the network, buffers are used at each input port and each output port of every switch and router. They give the switch or router time to process the packet and decide how to relay it. Generally, the higher the speed of the attached links, the bigger the buffers; they can be 64 Mbytes or larger and can hold thousands of data packets.
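The port buffers described above can be sketched as a simple bounded FIFO queue with tail drop; the class and capacity below are illustrative, not drawn from any particular switch implementation:

```python
from collections import deque

class PortBuffer:
    """Fixed-capacity FIFO port buffer with tail drop (illustrative sketch)."""
    def __init__(self, capacity):
        self.queue = deque()
        self.capacity = capacity
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.queue) >= self.capacity:
            self.dropped += 1          # buffer full: packet is tail-dropped
            return False
        self.queue.append(packet)
        return True

    def dequeue(self):
        # Oldest packet leaves first (first in, first out).
        return self.queue.popleft() if self.queue else None

# Offer six packets to a four-packet buffer while the output link is busy.
buf = PortBuffer(capacity=4)
for i in range(6):
    buf.enqueue(f"pkt-{i}")
print(buf.dropped)       # 2 (pkt-4 and pkt-5 were dropped)
print(buf.dequeue())     # pkt-0
```

A real switch sizes this queue in bytes rather than packets, but the drop behavior when the queue is full is the same.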
The second type of buffer smooths reception of media at the receiver. Called jitter buffers, they are designed to ensure that the audio or video can be extracted from the packets and the digital stream played out at the desired bit rate. Jitter buffers also allow the receiving software to determine whether any packets were dropped. In some cases with audio, the previous packet is simply replayed; the human ear can be fooled and fail to detect the missing sound.
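A jitter buffer's two jobs, reordering packets by sequence number and noticing gaps at playout time, can be sketched as follows. The class and method names are illustrative and not taken from any particular RTP stack:

```python
class JitterBuffer:
    """Minimal jitter-buffer sketch: reorder by sequence number and
    flag gaps (lost or late packets) at playout time."""
    def __init__(self):
        self.packets = {}      # sequence number -> payload
        self.next_seq = 0      # next sequence number due for playout

    def receive(self, seq, payload):
        # Packets may arrive out of order; store them by sequence number.
        self.packets[seq] = payload

    def play_next(self):
        """Return (payload, lost) for the next playout slot."""
        payload = self.packets.pop(self.next_seq, None)
        self.next_seq += 1
        if payload is None:
            return None, True   # gap: the receiver can replay the previous packet
        return payload, False

jb = JitterBuffer()
# Packets 1, 0 and 3 arrive out of order; packet 2 never arrives.
jb.receive(1, "B")
jb.receive(0, "A")
jb.receive(3, "D")
for _ in range(4):
    print(jb.play_next())
# ('A', False), ('B', False), (None, True), ('D', False)
```

The `(None, True)` result is where an audio receiver would conceal the loss, for example by replaying the previous packet as described above.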
Network buffers other than jitter buffers need to be managed to provide peak performance. Unmanaged, they operate in FIFO (first in, first out) mode. If this type of buffer empties, there is no problem; if it overfills, however, one or more packets are dropped. This is most likely to occur when the network is congested. Sometimes buffers are managed by subdividing them so that they can hold different types of traffic. For example, voice or RTP video might be placed in separate subdivisions, allowing the control software to send packets from those subdivisions more frequently than from others. The policy that determines the sending order, and the subdivisions from which packets are selected, is generally based on codes stored in the IP header. Network engineers refer to this method as a quality of service (QoS) technique.
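The codes in question are the Differentiated Services Code Point (DSCP) bits of the IP header; EF (46) is commonly used for voice and AF41 (34) for interactive video. A strict-priority scheduler over per-class subdivisions might look like the following sketch (one of several possible QoS policies, not a specific vendor's implementation):

```python
from collections import deque

# DSCP codepoints from the IP header select the subdivision (queue).
EF, AF41, BE = 46, 34, 0               # voice, video, best-effort

queues = {EF: deque(), AF41: deque(), BE: deque()}
priority_order = [EF, AF41, BE]        # strict priority: voice first

def classify(packet):
    # Place the packet in the subdivision matching its DSCP marking;
    # unknown markings fall back to best-effort.
    queues.get(packet["dscp"], queues[BE]).append(packet)

def schedule():
    # Send from the highest-priority non-empty subdivision.
    for dscp in priority_order:
        if queues[dscp]:
            return queues[dscp].popleft()
    return None

for pkt in [{"dscp": BE, "id": "file"},
            {"dscp": EF, "id": "voice"},
            {"dscp": AF41, "id": "video"}]:
    classify(pkt)

print([schedule()["id"] for _ in range(3)])   # ['voice', 'video', 'file']
```

Strict priority is the simplest such policy; production gear usually combines it with weighted fair queuing so lower classes cannot be starved.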
Recent research by Cisco, Microsoft, Comcast, and others has revealed a problem when buffers in the network are unnecessarily large. Called bufferbloat, this effect can degrade the performance of video, audio and, surprisingly, standard file transfers. While a detailed discussion of how this happens is beyond the scope of this newsletter, some insight can be gained from an example. Suppose you are viewing streamed video that plays out at 4 Mb/s, and you are connected to the Internet over a DSL line that provides 20 Mb/s downstream and 2 Mb/s upstream. If your device, or another user on the same connection, begins an upload to YouTube or Dropbox, your video will likely degrade. This seems counterintuitive because the file transfer flows in the opposite direction from your video. However, the video server sets its transfer rate by observing the rate at which acknowledgements arrive, and those acknowledgement packets travel upstream. Their delivery slows when the upstream file transfer begins and the buffers in the upstream direction fill to capacity.
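Some quick arithmetic shows why a full upstream buffer hurts so much. The 256 KB buffer size and 64 KB TCP window below are assumed, illustrative figures (real modems and stacks vary widely); the 2 Mb/s upstream rate comes from the example above:

```python
# Queuing delay added by a saturated upstream buffer.
upstream_rate_bps = 2_000_000      # 2 Mb/s upstream, from the DSL example
buffer_bytes = 256 * 1024          # assumed modem buffer size (illustrative)

delay_s = buffer_bytes * 8 / upstream_rate_bps
print(f"ACKs wait behind ~{delay_s:.2f} s of queued upload traffic")
# ~1.05 s added to every acknowledgement

# TCP throughput is roughly bounded by window / RTT, so the inflated
# round-trip time caps the downstream video flow.
window_bytes = 64 * 1024           # assumed TCP receive window (illustrative)
rtt_s = 0.05 + delay_s             # 50 ms base RTT plus the queuing delay
max_throughput_bps = window_bytes * 8 / rtt_s
print(f"Video flow capped near {max_throughput_bps / 1e6:.2f} Mb/s "
      f"(it needs 4 Mb/s)")
```

Under these assumptions the video flow is limited to well under 1 Mb/s, far below the 4 Mb/s it needs, even though the 20 Mb/s downstream link is nearly idle.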
If you are involved in delivering ABR video, bufferbloat is a topic you should research. Studying it will help you understand this performance-degrading effect and point you toward methods to mitigate it.