Phil Hippensteel on the Pros and Cons of HTTP

Hypertext Transfer Protocol (HTTP) has been the foundation of web browsing for more than thirty years. Recently, however, it has become a preferred protocol for moving audio and video across enterprise networks and the Internet. Because it seems to be gaining popularity in the AV industry, we're going to investigate it.

In its original use, HTTP ran over TCP (Transmission Control Protocol) and carried requests to deliver web pages. A client would open a TCP connection to the web server's port 80 and send a command called GET. The server would respond from port 80 with status code 200, meaning OK, and then deliver the file or files that make up the web page. At that point, the client could present the page to the user, and the connection would be closed.
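
To make that exchange concrete, here is a minimal sketch in Python of the transaction just described: it opens a TCP connection to a web server's port 80, sends a GET for the default page, and prints the status line that comes back. The host name is only a placeholder, and a modern server will usually answer a request like this with an HTTP/1.1 status line.

    import socket

    HOST = "example.com"   # placeholder web server
    PORT = 80              # the standard HTTP port

    # Open a TCP connection to the server's port 80.
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        # Send a minimal GET request for the page at "/".
        request = (
            "GET / HTTP/1.0\r\n"
            f"Host: {HOST}\r\n"
            "Connection: close\r\n"
            "\r\n"
        )
        sock.sendall(request.encode("ascii"))

        # Read the whole reply: a status line such as "HTTP/1.1 200 OK",
        # the headers, and then the page itself.
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # Print just the status line.
    print(response.split(b"\r\n", 1)[0].decode())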

So, why did the video industry embrace a protocol that seemed to be designed for quick request-response exchanges? There were probably two decisive reasons. First, streams flowing to and from the HTTP port, port 80, usually pass through firewalls without hindrance. When video conferencing first moved to IP, it was based on UDP (User Datagram Protocol) rather than TCP. Firewalls weren't very accommodating of UDP flows, because UDP had been widely abused by hackers in the early days of the Internet. To get the conference's UDP video through the firewall, separate protocols had to be developed to authenticate the flows. Not all firewalls could be configured to accept these protocols, so network engineers had to resort to opening ports on the firewall, something they were uneasy about doing. For this reason, developers began focusing on whether HTTP over port 80 could carry video.

The first popular version of HTTP was version 1.0, used during the early expansion of the Internet, when the Netscape browser was introduced. One of its drawbacks was that when a web page consisted of different objects from different sources, each resource file required a new TCP session just to issue the GET command. Often, the overhead of setting up the session took five or ten times as long as actually retrieving the file. Version 1.0 also required that GET commands be fulfilled in sequential order. Web pages quickly evolved to consist of five, ten, or even more objects, often stored on separate servers, and it became clear that a new version of HTTP was needed.
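
To get a feel for that overhead, here is a small sketch (in Python, using placeholder host and object names) that fetches a few objects the way an HTTP 1.0 client would: a brand-new TCP connection for every GET, with the setup time and the transfer time measured separately.

    import http.client
    import time

    HOST = "example.com"                       # placeholder server
    PATHS = ["/", "/style.css", "/logo.png"]   # placeholder objects on one page

    for path in PATHS:
        t0 = time.perf_counter()
        conn = http.client.HTTPConnection(HOST, 80, timeout=5)
        conn.connect()                         # a new TCP handshake for every object
        t1 = time.perf_counter()

        conn.request("GET", path, headers={"Connection": "close"})
        conn.getresponse().read()              # transfer of this one object
        t2 = time.perf_counter()
        conn.close()

        print(f"{path}: setup {1000 * (t1 - t0):.1f} ms, "
              f"transfer {1000 * (t2 - t1):.1f} ms")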

Version 1.1, the version in common use today, eliminates both of these issues. Through methods called persistent connections and pipelining, the client can keep a connection open, establish several connections at once, and use any of them to issue multiple GETs. A study of almost any streaming server's operation will show both of these techniques being used.
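
As a rough illustration of persistent connections, the sketch below (Python again, with placeholder names) opens one TCP connection and reuses it for several GETs in a row, which is the default behavior in HTTP 1.1. Python's standard library doesn't pipeline requests, so only the connection reuse is shown here; a real streaming client would typically open a few such connections in parallel as well.

    import http.client

    HOST = "example.com"                             # placeholder server
    PATHS = ["/", "/manifest.m3u8", "/segment1.ts"]  # placeholder objects/segments

    # One TCP connection, kept open and reused
    # (assumes the server allows keep-alive, which is the HTTP/1.1 default).
    conn = http.client.HTTPConnection(HOST, 80, timeout=5)
    for path in PATHS:
        conn.request("GET", path)       # same connection every time
        resp = conn.getresponse()
        body = resp.read()              # must be fully read before the next request
        print(path, resp.status, len(body), "bytes")
    conn.close()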

All of this doesn’t mean that ABR video based on HTTP and TCP have become highly efficient delivery mechanisms. HTTP using TCP continues to have problems with the way the Internet and enterprise IP networks work. For example, let’s say a typical ABR video flow has five CDN points where the video may reside and also needs the user and user’s account to be authenticated. This might require six or seven TCP connections. Each of these will typically be preceded by a DNS query to get the appropriate addresses. Current research shows that if the network is busy, the DNS requests can take several seconds or tens of seconds to be fulfilled. Add the need for security of the authentications and you might be facing a minute or more before a video flow can begin.

So, our conclusion must be this: HTTP isn't a highly efficient protocol. Nevertheless, for now it may be the only adequate choice for delivering some forms of non-live video over enterprise networks and the Internet.
