Before the discovery of the Viterbi algorithm, other algorithms had been proposed for the decoding of convolutional codes. The earliest of these was the sequential decoding algorithm, proposed by Wozencraft and later modified by Fano. The principle of sequential decoding can be explained as follows:
The sequential decoder generates hypotheses about the transmitted codeword sequence and computes a metric between each hypothesis and the received signal. The decoder moves forward as long as the computed metric indicates that its choices are likely. But if the metric indicates that the choices are unlikely, the decoder moves backward and changes the hypothesis. This process of changing hypotheses continues by trial and error until the decoder finds a likely hypothesis.
Implementation of Sequential Decoders
A sequential decoder can be implemented with either hard or soft decisions. In practice, however, soft decisions are usually avoided because they greatly increase the amount of storage required and the complexity of computation.
At the decoder, a replica of the encoder's code tree is stored. The decoder follows the path on the tree which agrees with the received n code symbols. At every level of the code tree, the decoder generates both possible branches and follows the one which agrees with the next group of n code symbols. Following this procedure, the decoder quickly penetrates the tree. But if there are errors in the received codeword, the decoder follows the most likely path. If two branches are equally likely, the decoder chooses one of them by an arbitrary rule. At each new level in the tree, the decoder generates new branches and compares them with the next set of n received symbols. The search continues to penetrate the tree along the most likely path while keeping a cumulative disagreement count. If the disagreement count exceeds a particular number, the decoder concludes that it is following a wrong path. It then abandons that path and tries another.
(i) Sequential decoding is thus a trial-and-error technique for searching out the correct path on the code tree.
(ii) The decoder searches for the correct path in a sequential manner, always operating on a single path at a time.
(iii) This is similar to an automatic traveller following a road map.
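The trial-and-error tree search described above can be sketched in Python. The encoder here (a rate-1/2, constraint-length-3 convolutional code with generators 111 and 101), the disagreement threshold, and the depth-first search order are all assumptions made for illustration, not a production decoder:

```python
# A minimal sketch of sequential (tree) decoding for an assumed
# rate-1/2, constraint-length-3 convolutional code (generators 111, 101).
# The decoder walks the code tree, keeps a cumulative disagreement
# count, and backtracks when that count exceeds a threshold.

def encode(bits, state=(0, 0)):
    """Encode a bit sequence; returns the coded symbol pairs."""
    out = []
    s1, s2 = state
    for b in bits:
        out.append((b ^ s1 ^ s2, b ^ s2))   # g1 = 111, g2 = 101
        s1, s2 = b, s1
    return out

def sequential_decode(received, n_bits, threshold=1):
    """Depth-first tree search with a disagreement threshold."""
    def search(depth, state, disagreements, decoded):
        if disagreements > threshold:       # unlikely path: back up
            return None
        if depth == n_bits:
            return decoded
        s1, s2 = state
        for b in (0, 1):                    # try both branches in turn
            branch = (b ^ s1 ^ s2, b ^ s2)
            d = sum(x != y for x, y in zip(branch, received[depth]))
            result = search(depth + 1, (b, s1),
                            disagreements + d, decoded + [b])
            if result is not None:
                return result
        return None                         # both branches failed: backtrack
    return search(0, (0, 0), 0, [])

message = [1, 0, 1, 1, 0]
coded = encode(message)
coded[2] = (coded[2][0] ^ 1, coded[2][1])   # inject one symbol error
print(sequential_decode(coded, len(message)))   # -> [1, 0, 1, 1, 0]
```

With a single injected error, the correct path accumulates only one disagreement, so it survives the threshold while wrong paths are abandoned.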
Limitations of Sequential Decoding
(i) The number of state metrics searched is a random variable.
(ii) The expected number of poor hypotheses and backward searches is a function of channel SNR. With reduction in SNR, the number of poor hypotheses increases.
(iii) If the average symbol arrival rate is higher than the average symbol decode rate, the input buffer overflows and the errors are introduced due to loss of data.
10.18.1. Stop and Wait ARQ System
The block diagram for the stop and wait ARQ system is shown in figure 10.89. This is the simplest ARQ system.
FIGURE 10.89 Stop and wait ARQ system.
In figure 10.89, X1, X2, … etc. are codewords. The transmitter sends the first codeword X1 during time Tw. This codeword reaches the receiver after a delay time t2, which is proportional to the distance between the transmitter and receiver. At the receiver, the detector checks the codeword for errors. Since no error is found, the receiver sends a positive acknowledgement signal ACK back to the transmitter. After receiving this ACK signal, the transmitter sends the next codeword X2. Suppose the receiver detects an error in X2; it then sends a negative acknowledgement signal NAK to the transmitter. On reception of this NAK signal, the transmitter retransmits the codeword X2.
The major disadvantage of this system is that the transmitter has to wait for the ACK or NAK signal after every codeword. This wastes a lot of transmission time.
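The stop-and-wait behaviour can be sketched as a small simulation. The `channel` function, the codeword labels and the ideal error detector are assumptions made for illustration:

```python
# A minimal sketch of stop-and-wait ARQ. The channel is modelled as a
# function that may corrupt a codeword; the receiver's error detector
# is assumed ideal (it always knows whether the word arrived intact).

def stop_and_wait(codewords, channel):
    received, transmissions = [], 0
    for word in codewords:
        while True:
            transmissions += 1
            arrived = channel(word)          # send and wait for reply
            if arrived == word:              # no error detected -> ACK
                received.append(arrived)
                break                        # transmitter sends next word
            # error detected -> NAK: retransmit the same word

    return received, transmissions

# Assumed channel that corrupts the second codeword exactly once.
errors = {"X2": 1}
def channel(word):
    if errors.get(word, 0) > 0:
        errors[word] -= 1
        return word + "*"                    # corrupted copy
    return word

print(stop_and_wait(["X1", "X2", "X3"], channel))
# -> (['X1', 'X2', 'X3'], 4)
```

Three codewords cost four transmissions here, because X2 has to be sent twice; every transmission also incurs a full round-trip wait.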
10.18.2. Go Back N ARQ
The block diagram for the go-back-N ARQ system is shown in figure 10.90. The major difference from the previous system is that the transmitter does not wait for the ACK signal before transmitting the next codeword. It transmits the codewords continuously as long as it does not receive a NAK signal.
FIGURE 10.90 Go-Back-N ARQ system.
When the receiver detects an error in the third codeword X3, as shown in figure 10.90, it sends the NAK signal. But this signal takes some time to reach the transmitter, and by that time the transmitter has already transmitted codewords up to X7. On reception of the NAK signal, the transmitter retransmits all the codewords from X3 onwards. The receiver discards all the codewords it has received from X3 onwards, i.e., X3 to X7, and then receives the codewords retransmitted by the transmitter.
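The go-back-N behaviour can be sketched as follows; the `nak_delay` of 4 words (so that X7 has just been sent when the NAK for X3 arrives) and the single-error model are assumptions chosen to mirror the figure's scenario:

```python
# A minimal sketch of go-back-N ARQ. While the NAK travels back, the
# transmitter keeps sending; on receiving it, the transmitter rewinds
# to the failed word and retransmits everything from there. The error
# detector is assumed ideal and only the first copy of one word fails.

def go_back_n(codewords, error_index, nak_delay=4):
    sent, received = [], []
    i = 0
    error_seen = False
    while i < len(codewords):
        sent.append(codewords[i])
        if i == error_index and not error_seen:
            error_seen = True
            # NAK in flight: nak_delay further words go out, and the
            # receiver discards them all.
            for j in range(i + 1, min(i + 1 + nak_delay, len(codewords))):
                sent.append(codewords[j])
            i = error_index              # go back: resend from the bad word
            continue
        received.append(codewords[i])    # error-free, in order: accepted
        i += 1
    return received, sent

words = [f"X{k}" for k in range(1, 10)]          # X1 ... X9
rcvd, sent = go_back_n(words, error_index=2)     # error in X3
print(len(sent))       # -> 14 transmissions for 9 codewords
print(rcvd == words)   # -> True: delivered in order
```

The cost of one error is five extra transmissions (X3 to X7 all repeated), which is the price go-back-N pays for keeping the receiver simple.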
10.18.3. Selective Repeat ARQ System
The block diagram of the selective repeat ARQ system is shown in figure 10.91. In this system as well, the transmitter does not wait for the ACK signal before transmitting the next codeword; it transmits codewords continuously until it receives a NAK signal from the receiver. The receiver sends the NAK signal back to the transmitter as soon as it detects an error in a received codeword. For example, suppose the receiver detects an error in the third codeword X3. By the time this NAK signal reaches the transmitter, it has transmitted the codewords up to X7, as shown in figure 10.91. On reception of the NAK signal, the transmitter retransmits only the codeword X3 and then continues with the sequence X8, X9, … as shown in figure 10.91. The codewords X4, X5, X6 and X7 already received are not discarded by the receiver. Since the retransmitted codeword arrives in between the regular codewords, the receiver has to rearrange the codewords into the correct sequence.
FIGURE 10.91 Selective repeat ARQ system.
Hence, the selective repeat ARQ system is the most efficient but the most complex system of all the ARQ systems.
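For comparison, here is a sketch of selective repeat under the same assumed scenario (one error in X3, NAK arriving after 4 further words have been sent):

```python
# A minimal sketch of selective-repeat ARQ: only the failed word is
# retransmitted, and the receiver buffers out-of-order words and
# rearranges them. The single-error scenario and nak_delay are assumed.

def selective_repeat(codewords, error_index, nak_delay=4):
    sent, buffer = [], {}
    # Build the transmit order: the bad word is resent once the NAK
    # arrives, i.e. after nak_delay further words have gone out.
    queue = list(range(len(codewords)))
    queue.insert(error_index + 1 + nak_delay, error_index)
    for step, idx in enumerate(queue):
        word = codewords[idx]
        sent.append(word)
        corrupted = (idx == error_index and step == error_index)
        if not corrupted:
            buffer[idx] = word               # store, possibly out of order
    received = [buffer[k] for k in sorted(buffer)]  # rearrange in sequence
    return received, sent

words = [f"X{k}" for k in range(1, 10)]              # X1 ... X9
rcvd, sent = selective_repeat(words, error_index=2)  # error in X3
print(len(sent))       # -> 10 transmissions: only X3 is resent
print(rcvd == words)   # -> True: receiver reorders into sequence
```

The same error now costs only one extra transmission instead of five, but the receiver needs a reordering buffer, which is exactly the complexity trade-off noted above.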
■ The transmitted signal passes through a noisy channel. Due to noise interference, some errors are introduced in the received data. These errors can be detected, and sometimes corrected, using coding techniques. Generally, the error control methods are of two types:
(i) Error detection with retransmission, and
(ii) Forward acting error correction.
■ A channel encoder adds extra bits to the message signal. The channel decoder, on the other hand, identifies these redundant bits and uses them to detect and correct errors in the message bits, if any.
■ The encoded block of ‘n’ bits is known as a codeword. It consists of message bits and redundant bits.
■ The number of bits ‘n’ after coding is known as the block length of the code.
■ The ratio of the number of message bits (k) to the number of bits (n) at the encoder output is known as the code rate. Usually the code rate is denoted by 'r', i.e.,
r = k/n
We find that 0 < r < 1.
■ Channel data rate is the bit rate at the output of the encoder. If the bit rate at the input of the encoder is Rs, then the channel data rate would be,
Channel data rate (Ro) = (n/k) Rs
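As a quick numerical illustration of the two formulas above (the (n, k) = (7, 4) block code and the 1 kbit/s source rate are assumed values):

```python
# Code rate and channel data rate for an assumed (7, 4) block code
# fed by a 1 kbit/s source.

k, n = 4, 7
Rs = 1000                      # bit rate into the encoder (bit/s)

r = k / n                      # code rate r = k/n
Ro = (n / k) * Rs              # channel data rate Ro = (n/k) Rs = Rs / r

print(round(r, 3))             # -> 0.571
print(Ro)                      # -> 1750.0
```

Since r < 1, the channel data rate is always higher than the source rate: the redundant bits must be carried on top of the message bits.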
■ An ‘n’-bit codeword can be visualized in an n-dimensional space as a vector whose elements or coordinates are the bits in the codeword. Let us visualize the 3-bit codewords, as this case is the simplest.
■ The Hamming distance between two code vectors is equal to the number of elements in which they differ. As an example, let X = 101 and Y = 110. Since the two code vectors differ in the second and third bits, the Hamming distance between X and Y is ‘two’. Hamming distance is denoted by d(X, Y) or simply ‘d’, i.e.,
d(X,Y) = d = 2.
■ Minimum distance (dmin) is the smallest Hamming distance between the valid code vectors.
■ It may be noted that error detection is possible only if the received vector is not equal to some other valid code vector. This means that the number of transmission errors in the received code vector must be less than the minimum distance dmin. Table 16.2 shows some of the requirements on the error control capability of a code.
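The distance measures above can be checked with a short sketch; the 3-bit even-parity code used here is a toy example, and the capability formulas (detect s errors if dmin ≥ s + 1, correct t errors if dmin ≥ 2t + 1) are the standard requirements the table refers to:

```python
# Hamming distance, minimum distance of a toy code, and the resulting
# error-control capability. The 3-bit even-parity code is an assumed
# example, not a code from the text.
from itertools import combinations

def hamming(x, y):
    """Number of bit positions in which x and y differ."""
    return sum(a != b for a, b in zip(x, y))

print(hamming("101", "110"))                 # -> 2, as in the example

code = ["000", "011", "101", "110"]          # toy even-parity code
dmin = min(hamming(x, y) for x, y in combinations(code, 2))
print(dmin)                                  # -> 2
print("detects up to", dmin - 1, "errors")   # dmin >= s + 1
print("corrects up to", (dmin - 1) // 2)     # dmin >= 2t + 1
```

With dmin = 2, this code can detect any single error but cannot correct any, which matches its use as a pure error-detecting (parity) code.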
■ The code efficiency is expressed as the ratio of the message bits in a block to the bits transmitted for that particular block by the encoder, i.e.,
Code efficiency = k/n
■ The number of non-zero elements in a code vector is known as the weight of the code vector.
■ Parity may be of two types:
(i) Even Parity: The parity of a binary word is said to be ‘even’ if it contains an even number of 1s. For example, 01101001 has even parity since there are four 1s in the word.
(ii) Odd Parity: The parity of a binary word is said to be ‘odd’ if it contains an odd number of 1s. For example, 10010001 has odd parity because there are three 1s in this word.
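A one-line parity check reproduces both examples:

```python
# Even/odd parity is simply the count of 1s in the binary word.

def parity(word):
    return "even" if word.count("1") % 2 == 0 else "odd"

print(parity("01101001"))   # four 1s  -> even
print(parity("10010001"))   # three 1s -> odd
```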
■ Advantages of LRC over VRC: Let us now summarise the advantages of the LRC method over the VRC method:
(i) In LRC method, single errors can be detected and corrected.
(ii) In the LRC method, double and triple errors can be detected.
The drawback of the LRC method is its increased complexity.
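A minimal sketch of how the row (VRC) and column (LRC) parity bits together locate, and hence correct, a single-bit error; the 3×4 data block is an arbitrary example:

```python
# VRC puts one parity bit per word (row); LRC puts one parity bit per
# column over the block. A single flipped bit fails exactly one row
# check and one column check, and their intersection locates the bit.

def parity_bit(bits):
    return sum(bits) % 2                     # even parity

block = [[0, 1, 1, 0],                       # assumed data words (rows)
         [1, 0, 0, 1],
         [1, 1, 0, 0]]
vrc = [parity_bit(row) for row in block]         # one bit per row
lrc = [parity_bit(col) for col in zip(*block)]   # one bit per column

block[1][2] ^= 1                             # single-bit error in transit

bad_row = next(r for r, row in enumerate(block)
               if parity_bit(row) != vrc[r])
bad_col = next(c for c, col in enumerate(zip(*block))
               if parity_bit(col) != lrc[c])
block[bad_row][bad_col] ^= 1                 # flip it back: corrected
print(bad_row, bad_col)                      # -> 1 2
```

This is the extra work the summary refers to: the receiver must run both row and column checks and cross-index the failures, whereas VRC alone needs only one check per word.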