Network errors can occur at many levels during transmission of data and files. They can happen at the physical level due to interference, reflections, defective transmitters, faulty receivers, or other factors. IP packets can be dropped if they are corrupted, incorrectly processed by switching devices, or contain invalid fields. However, the most common reason for packet loss is the overfilling of network buffers. In this newsletter, we will introduce you to many forms of errors, the methods used to mitigate them, and the impact they have on applications.
If you read basic data communications texts, most describe a digital signal as a pattern of highs and lows that represent ones and zeros. In practice, this is rarely the case. Most transmitted digital signals are complex waveforms that represent 2, 4, or 8 bits in each pattern. These patterns can be more difficult to detect correctly than simple highs and lows. For example, a sender may send a waveform that represents 1101. If the receiver interprets the signal as 0011, a three-bit error has occurred.
Physical level bit errors are measured by sending a stream of bits and determining how many are correctly received. The test is usually called a BER test (for bit error rate). Technicians are often redundant and refer to a "BERT test." The result is usually expressed as a negative power of ten, such as 10^-6 or 10^-8. A result of 10^-6 would be interpreted as one errored bit per 1,000,000 bits sent. Test devices that perform a BER test have connections for each major cable type: coax, twisted pair, fiber, and so forth. These tests are usually run on a single link in the network.
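The tally a BER tester performs can be sketched in a few lines: compare a known transmitted bit pattern against what was captured at the receiver and divide the mismatches by the total. The bit patterns below are illustrative, not taken from a real test device.

```python
def bit_error_rate(sent: str, received: str) -> float:
    """Fraction of bits that differ between the sent and received streams."""
    assert len(sent) == len(received), "streams must be the same length"
    errors = sum(1 for s, r in zip(sent, received) if s != r)
    return errors / len(sent)

# Send 1,000,000 bits; simulate a single flipped bit at the receiver.
sent = "1101" * 250_000
received = sent[:-1] + ("0" if sent[-1] == "1" else "1")
print(bit_error_rate(sent, received))  # 1e-06, i.e. a BER of 10^-6
```

A real tester does the same comparison in hardware against a standardized pseudo-random bit sequence rather than a fixed pattern.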
On the other hand, packet loss is measured as a percentage. Unlike the BER test, network engineers often measure packet loss at layer three, the IP level. They can send a series of packets with a test device or use software that reads the loss level from protocols such as RTCP (Real-time Transport Control Protocol). RTCP is routinely used with VoIP and many forms of video.
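One common way such software arrives at a loss percentage is by watching for gaps in packet sequence numbers, the same statistic RTCP receiver reports carry. The sketch below is a simplified illustration with a made-up sequence list; it ignores reordering and sequence-number wraparound that real implementations must handle.

```python
def packet_loss_percent(seq_numbers: list[int]) -> float:
    """Estimate loss from gaps in a sorted list of received sequence numbers."""
    expected = seq_numbers[-1] - seq_numbers[0] + 1  # packets the sender emitted
    lost = expected - len(seq_numbers)               # gaps = packets never seen
    return 100.0 * lost / expected

received = [1, 2, 3, 5, 6, 7, 8, 10]  # packets 4 and 9 never arrived
print(packet_loss_percent(received))  # 20.0 (2 of 10 packets lost)
```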
The relationship between BER results and packet loss results is not straightforward. This is because packet lengths vary, generally from about 64 bytes to 1,500 bytes. For example, a BER of 10^-6 (one bit in 1,000,000 errored) will cause a packet loss rate of 0.1% with 125-byte packets, since each such packet contains 1,000 bits. On the other hand, typical file transfer packets containing 1,250 bytes will have an average loss of about 1%. A loss rate of 0.1% would not seriously impact most VoIP conversations. However, a loss rate of 1% would seriously deteriorate a digital signage flow.
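The arithmetic behind these figures: a packet is discarded if any one of its bits is errored, so the loss probability is 1 - (1 - BER)^bits, which for small BER is approximately bits × BER. The numbers below reproduce the two examples in the text.

```python
def packet_loss_from_ber(ber: float, packet_bytes: int) -> float:
    """Expected packet loss percentage: a packet is lost if any bit errors."""
    bits = packet_bytes * 8
    return 100.0 * (1.0 - (1.0 - ber) ** bits)

print(round(packet_loss_from_ber(1e-6, 125), 2))   # 0.1  -> ~0.1% for 1,000-bit packets
print(round(packet_loss_from_ber(1e-6, 1250), 2))  # 1.0  -> ~1% for 10,000-bit packets
```

The approximation holds because bit errors are assumed independent; bursty errors on real links make the actual relationship messier, which is part of why the text calls it "not straightforward."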
Physical level errors are sometimes mitigated by using FEC (forward error correction). While the mathematics behind this method can be quite complex, the basic idea can be depicted with an example. Suppose I have 64 bits (eight bytes) to send. First I determine the parity of each byte. That is, I determine whether there is an even or odd number of ones in the byte. I build a code that contains a one for each byte with odd parity and a zero for each byte with even parity. Now I repeat the process of parity determination using the 1st, 9th, 17th, 25th, etc. bits. I follow this by using the 2nd, 10th, 18th, etc. I continue this process starting with the 3rd bit, the 4th bit, and so forth until I've checked parity across all 64 bits. This combination of row and column checks, historically called vertical and longitudinal redundancy checking, allows me to develop a code of 16 parity bits. I append this code to the message and send it. The receiver repeats the parity check technique and can detect and correct all single-bit errors. While I've added 2 bytes to an eight-byte message, a 25% increase in overhead, I have a technique that will allow detection of 99% of all errors. It is important to realize that modern FEC techniques are much more complex but also more efficient, with lower overhead. However, in all of these techniques, the principle is the same: add a small number of additional bits as an error code, and you make it possible for the receiver to fix many errors in transmission without retransmission.
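The scheme described above can be sketched directly: compute a row parity bit for each byte and a column parity bit for each bit position. A single flipped bit is located by the intersection of the one failing row check and the one failing column check, then flipped back. The message bytes here are arbitrary examples.

```python
def parities(data: list[int]) -> tuple[list[int], list[int]]:
    """Row parity for each byte, column parity for each of the 8 bit positions."""
    rows = [bin(b).count("1") % 2 for b in data]
    cols = [sum((b >> (7 - i)) & 1 for b in data) % 2 for i in range(8)]
    return rows, cols

def correct_single_error(data: list[int], rows: list[int], cols: list[int]) -> list[int]:
    """Recompute parities; if exactly one row and one column fail, flip that bit."""
    new_rows, new_cols = parities(data)
    bad_rows = [i for i in range(len(data)) if new_rows[i] != rows[i]]
    bad_cols = [i for i in range(8) if new_cols[i] != cols[i]]
    if len(bad_rows) == 1 and len(bad_cols) == 1:   # single-bit error located
        data = data[:]
        data[bad_rows[0]] ^= 1 << (7 - bad_cols[0])  # flip it back
    return data

message = [0b11010010, 0b00111100, 0b10101010, 0b11110000,
           0b00001111, 0b01010101, 0b11001100, 0b00110011]
row_par, col_par = parities(message)   # the 16 parity bits sent with the message

garbled = message[:]
garbled[3] ^= 0b00010000               # channel flips one bit in the fourth byte
repaired = correct_single_error(garbled, row_par, col_par)
print(repaired == message)  # True
```

Note how the overhead matches the text: 8 row bits plus 8 column bits is 16 parity bits (2 bytes) on a 64-bit message.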