DECODING METHODS OF CONVOLUTIONAL CODES
There are three methods for decoding the convolutional codes. They are as under :
(i) Viterbi algorithm
(ii) Feedback decoding
(iii) Sequential decoding
10.17.1. Viterbi Algorithm
The Viterbi algorithm operates on the principle of maximum likelihood decoding and achieves optimum performance. A maximum likelihood decoder has to examine the entire received sequence Y and find the valid path which has the smallest Hamming distance from Y. However, there are 2^{N} possible paths for a message sequence of N bits, which is far too many to compare directly. The Viterbi algorithm applies the maximum likelihood principle while limiting the comparison to a small set of surviving paths, thereby making maximum likelihood decoding practical. Before we explain the Viterbi algorithm for the decoding of convolutional codes, it is necessary to define certain important terms.
Metric
It is defined as the Hamming distance of each branch of each surviving path from the corresponding branch of Y (the received signal). The metric is defined by assuming that 0s and 1s have the same transmission error probability.
Surviving Path
The surviving path is defined as the path of the decoded signal with minimum metric.
(i) Let the received signal be represented by Y. The Viterbi decoder assigns a metric to each branch of each surviving path.
(ii) By summing the branch metrics, we get the path metric. To understand the Viterbi algorithm better, let us solve the example given below :
EXAMPLE 10.42. Given the convolutional encoder of figure 10.48 and for a received signal Y= 11 01 11. Show the first three branches of the valid paths emerging from the initial node a_{0} in the code trellis.
Solution : The received or input signal Y= 11 01 11.
Let us consider the code trellis diagram of figure 10.53 for the (2, 1, 2) encoder. It shows that from the current state a, the next state can be either a or b depending on whether the message bit is m_{0} = 0 or 1. We have redrawn these two branches in figure 10.56, where a_{0} represents the initial state and a_{1} and b_{1} represent the next possible states.
The solid line represents the branch for m_{0} = 0 and the dotted line represents the branch for m_{0} = 1.
diagram
FIGURE 10.56 First step in Viterbi’s algorithm
In figure 10.56, the numbers in brackets written below the branches represent the branch metrics, each obtained as the Hamming distance between the encoded signal on that branch and the corresponding part of the received signal Y. For example, for the branch a_{0} to a_{1}, the encoded signal is 00 and the received signal is 11; hence, the branch metric is (2). For the branch a_{0} to b_{1}, the encoded signal and the received signal are both 11; hence, the branch metric is (0).
The encircled numbers at the right hand end of each branch represent the running path metric, which is obtained by summing the branch metrics from a_{0}. For example, the running path metric for the branch a_{0} a_{1} is 2 and that for the branch a_{0} b_{1} is 0.
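The branch metric just described is simply a Hamming distance between bit pairs; a minimal Python sketch (bit pairs written as strings) reproduces the two values above:

```python
# A branch metric is the Hamming distance between the encoder output
# on that branch and the corresponding received pair.
def branch_metric(branch_output, received):
    return sum(b != r for b, r in zip(branch_output, received))

received = "11"                       # first pair of Y = 11 01 11
print(branch_metric("00", received))  # branch a0 -> a1, output 00: metric 2
print(branch_metric("11", received))  # branch a0 -> b1, output 11: metric 0
```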
DIAGRAM
FIGURE 10.57 Second step in the Viterbi algorithm.
When the next part of input bits i.e., Y = 01 are received at the nodes a_{1} and b_{1}, then four possible branches emerge from these two nodes as shown in figure 10.57. The next four possible states are a_{2}, b_{2}, c_{2} and d_{2}.
The numbers in the brackets written below each branch represent the branch metric. For example, for the branch a_{1} a_{2}, the branch metric is (1) which is obtained by taking the difference between the encoded signal 00 and received signal 01. The running path metric for the same branch is 3 which is obtained by adding the branch metrics from a_{0} [(2) + (1) = 3 from a_{0}, to a_{1} and a_{1} to a_{2}].
Similarly, the path metric for the path a_{0}, a_{1} b_{2} is 3, that of the path a_{0}, b_{1} d_{2} is 0 and so on.
diagram
FIGURE 10.58 Paths and their path metrics for the Viterbi algorithm.
Choosing the Paths of Smaller Metric
There are different paths shown in figure 10.58. We must carefully examine the path metric of each path. For example, the path a_{0} a_{1} a_{2} a_{3} has a path metric equal to 5. The other paths and their path metrics are listed in table 10.18.
Table 10.18.
S. No. | Path | Path metric | Decision |
1 | a_{0} a_{1} a_{2} a_{3} | 5 | x |
2 | a_{0} a_{1} a_{2} b_{3 }(survivor) | 3 | ✓ |
3 | a_{0} a_{1} b_{2} c_{3} | 4 | x |
4 | a_{0} a_{1} b_{2} d_{3} | 4 | x |
5 | a_{0} b_{1} c_{2} a_{3 }(survivor) | 2 | ✓ |
6 | a_{0} b_{1} c_{2} b_{3} | 4 | x |
7 | a_{0} b_{1} d_{2} c_{3 }(survivor) | 1 | ✓ |
8 | a_{0} b_{1} d_{2} d_{3 }(survivor) | 1 | ✓ |
Now, observe the other path a_{0} a_{1} a_{2} b_{3} arriving at node b_{3}; it has a metric of 3. Hence, regardless of what happens subsequently, this path will have a smaller Hamming distance from Y than the other path arriving at b_{3}. Thus, this path is more likely to represent the actual transmitted sequence. Therefore, we discard the larger-metric paths arriving at nodes a_{3}, b_{3}, c_{3} and d_{3}, leaving a total of 2^{kL} = 4 surviving paths, marked with a (✓) sign in Table 10.18.
The paths marked by (x) in Table 10.18 and figure 10.58 have larger metrics and hence are discarded; the path with the smaller metric is declared the survivor at that node. Note that there is one survivor for each node (see Table 10.18).
Thus, the surviving paths are : a_{0} a_{1} a_{2} b_{3}, a_{0} b_{1} c_{2} a_{3}, a_{0} b_{1} d_{2} c_{3} and a_{0} b_{1} d_{2} d_{3}. None of the surviving path metrics is equal to zero. This shows the presence of detectable errors in the received signal Y.
diagram
FIGURE 10.59 Illustration of the Viterbi algorithm for maximum likelihood decoding.
Figure 10.59 depicts the continuation of figure 10.58 for a complete message of N = 12 bits. All the discarded branches and all labels except the running path metrics have been omitted for the sake of simplicity. The letter T under a node shows that the two arriving paths had equal running metrics; in such cases the choice of survivor is arbitrary. The maximum likelihood path follows the solid line from a_{0} to a_{12}, as shown in figure 10.59. The final value of the path metric is 2, which shows that there are at least two transmission errors present in Y.
The decoder assumes the corresponding transmitted sequence to be Y + E; this sequence and the message sequence M have been written below the trellis.
From figure 10.59, we observe that at node a_{12} only one path has arrived with metric 2. This path is shown by a dark line. Note that since this path has the lowest metric, it is the surviving path and signal Y is decoded from this path. Whenever this path is a solid line, the message is 0 and when it is a dotted line, the message is 1. This has been shown in figure 10.59.
A Viterbi decoder must calculate two metrics for each node (branch metric and path metric) and store 2^{kL} surviving paths, each consisting of N branches. Hence, the decoding complexity increases exponentially with L and linearly with N. Therefore, the Viterbi algorithm is used only for codes with small values of L.
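The complete procedure can be sketched in a few lines of Python. This is a hedged illustration, assuming the (2, 1, 2) encoder with generator sequences g1 = (1, 1, 1) and g2 = (1, 0, 1), which is consistent with the branch outputs used in Example 10.42 (the state is the last two message bits, newest first):

```python
# Hard-decision Viterbi decoding sketch for an assumed (2,1,2) encoder
# with generators g1 = 111, g2 = 101.
def encode_branch(state, bit):
    s1, s2 = state
    v1 = bit ^ s1 ^ s2              # first output bit
    v2 = bit ^ s2                   # second output bit
    return (v1, v2), (bit, s1)      # (output pair, next state)

def viterbi(received_pairs):
    paths = {(0, 0): (0, [])}       # state -> (path metric, survivor message)
    for r in received_pairs:
        new_paths = {}
        for state, (metric, msg) in paths.items():
            for bit in (0, 1):
                out, nxt = encode_branch(state, bit)
                m = metric + (out[0] != r[0]) + (out[1] != r[1])
                if nxt not in new_paths or m < new_paths[nxt][0]:
                    new_paths[nxt] = (m, msg + [bit])
        paths = new_paths           # keep one survivor per state
    return min(paths.values())      # smallest final metric wins

# Y = 11 01 11 as in Example 10.42: best metric 1, decoded message 1 1 0
print(viterbi([(1, 1), (0, 1), (1, 1)]))
```

Running this reproduces the survivor metrics 2, 3, 1, 1 at the four final states of Table 10.18 internally, and returns the minimum-metric survivor.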
10.17.2. Metric Diversion Effect
For a large number of message bits to be decoded, the storage requirement becomes extremely large. This problem can be reduced by exploiting the metric diversion effect, which lowers the required memory storage.
10.17.3. Free Distance and Coding Gain
The error detection and correction capability of block and cyclic codes depends on the minimum distance d_{min} between the code vectors. In the case of a convolutional code, however, the entire transmitted sequence is to be considered as a single code vector. Therefore, the free distance (d_{free}) is defined as the minimum distance between code vectors. Since the code is linear, the minimum distance between code vectors is the same as the minimum weight of a nonzero code vector. Hence, the free distance is equal to the minimum weight of a nonzero code vector.
Therefore,
Free distance d_{free} = Minimum distance
= Minimum weight of code vectors
If X represents the transmitted signal, then the free distance is given by,
d_{free} = [W(X)]_{min} …(10.56)
where [W(X)]_{min} = Minimum weight of the code vector.
Just as the minimum distance decides the capability of block or cyclic codes to detect and correct errors, the free distance decides the error control capability of the convolutional code.
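Because the free distance equals the minimum weight of a nonzero code sequence, it can be checked by brute force for short codes. A sketch, assuming for illustration a rate-1/2, L = 2 encoder with generators g1 = 111 and g2 = 101 (an example code, not necessarily the encoder of any particular figure here):

```python
from itertools import product

# Assumed example encoder: rate 1/2, memory L = 2, g1 = 111, g2 = 101.
def encode(bits):
    s1 = s2 = 0
    out = []
    for b in bits:
        out += [b ^ s1 ^ s2, b ^ s2]
        s1, s2 = b, s1
    return out

# d_free = minimum output weight over all short nonzero messages, each
# padded with L = 2 flush zeros to drive the encoder back to state 00.
d_free = min(
    sum(encode(list(msg) + [0, 0]))
    for n in range(1, 6)
    for msg in product([0, 1], repeat=n)
    if any(msg)
)
print(d_free)  # 5 for this encoder
```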
Coding gain (A)
The coding gain (A) is defined as the ratio of the (E_{b}/N_{0}) required by an uncoded signal to the (E_{b}/N_{0}) required by the coded signal for the same error probability. The coding gain is used for comparing different coding techniques.
Coding gain (convolutional encoder), A = (r d_{free}) / 2 …(10.57)
where r = Code rate, and, d_{free} = The free distance.
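As a numeric illustration (assuming equation (10.57) is taken as the usual hard-decision estimate A = r d_free / 2; the figures below follow from that assumption):

```python
import math

# Hard-decision coding gain estimate for a rate-1/2 code with d_free = 5.
r, d_free = 1 / 2, 5
A = r * d_free / 2
print(A)                              # 1.25 as a plain ratio
print(round(10 * math.log10(A), 2))   # about 0.97 dB
```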
10.17.4. Transfer Function of the Convolutional Codes
The distance properties and the error rate performance of a convolutional code can be obtained from its state diagram. The state diagram shown in figure 10.60 will be used to demonstrate the method of obtaining the distance properties of a convolutional code.
First, let us label the branches of the state diagram as p^{0} = 1, p^{1}, p^{2}, p^{3}, etc. Here, the exponent of p represents the Hamming distance between the sequence of output bits corresponding to that branch and the sequence of output bits corresponding to the all-zero branch. This is because we assume that the input to the encoder is an all-zero sequence.
diagram
FIGURE 10.60 State diagram for rate 1/3, k = 3 convolutional code.
The self loop at node a can be eliminated because it does not contribute to the distance properties of a code sequence relative to all zero code sequence. Node a is split into two nodes (a and e in figure 10.61). One of these nodes (a) represents the input of the state diagram and the other one (e) represents the output of the state diagram of figure 10.61. After implementing all these modifications, we get the state diagram shown in figure 10.61.
diagram
FIGURE 10.61 State diagram for the rate 1/3, k = 3 convolutional code with distance labels on the branches.
Carefully observe the distance labels marked on the various branches in figure 10.61. For example, for the branch from a to c, the output is 111 (figure 10.60). Hence, in figure 10.61, we have written a distance label of p^{3} on this branch (ac). On similar lines, the output for the branch b to a in figure 10.60 is 011. Hence, it has been shown as branch (be) in figure 10.61 and the corresponding distance label is p^{2}. Similarly, the other distance labels have been marked in figure 10.61.
The state diagram of figure 10.61 has five nodes (a to e) and the four state equations can be written as under :
X_{c} = p^{3} X_{a} + p X_{b}
X_{b} = p X_{c} + p X_{d}
X_{d} = p^{2} X_{c} + p^{2} X_{d}
X_{e} = p^{2} X_{b} …(10.58)
Here, it may be noted that these state equations have been written by considering the branches incident on each node. For example, for node c, the incident branches are ac and bc, with distance labels p^{3} and p respectively. Therefore, the state equation is
X_{c} = p^{3} X_{a} + p X_{b} …(10.59)
The transfer function of the code is defined as,
Transfer function T(p) = X_{e} / X_{a}
By solving the state equations given by equation (10.58), we obtain
T(p) = p^{6} / (1 - 2p^{2}) …(10.60)
or T(p) = p^{6} + 2p^{8} + 4p^{10} + 8p^{12} +… …(10.61)
Observations
(i) The transfer function of equation (10.61) tells us that the first term p^{6} represents a single path of Hamming distance 6. From figure 10.61, it is clear that the distance-6 path is a c b e.
(ii) The second term of equation (10.61) indicates that there are two paths of p = 8 from node a to node e. These paths are acdbe and acbcbe.
(iii) The third term in equation (10.61) indicates that there are four paths of distance p = 10.
(iv) The minimum distance of the code is called as the minimum free distance and is denoted by d_{free}. In this example, d_{free} = 6.
Thus, the transfer function gives us the distance properties of the convolutional code.
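The series expansion in equation (10.61) follows from the geometric series 1/(1 - 2p^2) = sum over k of (2p^2)^k; a quick sketch that regenerates the coefficients:

```python
# Coefficients of T(p) = p^6 / (1 - 2 p^2) = sum over k of 2^k * p^(6 + 2k).
def t_coeffs(max_power):
    coeffs, k = {}, 0
    while 6 + 2 * k <= max_power:
        coeffs[6 + 2 * k] = 2 ** k    # coefficient of p^(6 + 2k)
        k += 1
    return coeffs

print(t_coeffs(12))  # {6: 1, 8: 2, 10: 4, 12: 8}
```

The coefficients 1, 2, 4, 8 match the path counts listed in the observations.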
EXAMPLE 10.43. For the convolutional encoder arrangement shown in figure 10.62, draw the state diagram and hence trellis diagram. Determine output digit sequence for the data digits 11010100. What are the dimensions of the code (n, k) and constraint length ? Use viterbi’s algorithm to decode the following sequence :
100 110 111 101 001 101 001 010
Compare the convenience of code tree, trellis diagram and state diagram from encoding and decoding of convolutional codes point of view.
Solution : Code Dimension
The code dimension is represented by (n, k, L) where
n = Number of encoded bits per message
k = Number of message bits taken at a time by the encoder
L = Encoder’s memory.
Here, n = 3, k = 1 and L = 2. Therefore, the code dimension is given by (3, 1, 2); but if the code dimension is to be expressed as (n, k), then it is (3, 1).
diagram
FIGURE 10.62
Therefore, code dimension (n, k) = (3, 1)
Constraint Length
The constraint length may be defined in following two different ways :
(i) Constraint length K = n(L + 1) = 3(2 + 1) = 9
(ii) The other definition states that constraint length is equal to the number of shifts over which a single message bit can influence the encoder output. Applying this definition, the constraint length, K = 3.
Code trellis for the encoder of figure 10.62.
The code trellis for the encoder of figure 10.62 is shown in figure 10.63. It is obtained as under :
(i) If the current state is S_{3} S_{2} = 00, i.e., a, and if S_{1} = 0, then after shifting all the contents through one bit, the next state will be S_{3} S_{2} = 00, i.e., a; this is shown by a solid line. But if S_{3} S_{2} = 00 = a is the initial state and S_{1} = 1, then after the shift, the state will be S_{3} S_{2} = 01, i.e., b; hence, this is shown by the dotted line emerging from a. The diagram is completed by obtaining the next states for each initial state in this way.
diagram
FIGURE 10.63 Code trellis for the encoder in figure 10.62.
The state diagram
The state diagram obtained from the code trellis is shown in figure 10.64.
diagram
FIGURE 10.64 State diagram of the encoder in figure 10.62.
The Output Sequence
(i) To write the generator polynomials, it is necessary to first write the generating sequences for the three outputs x^{(1)}, x^{(2)} and x^{(3)}. According to figure 10.62, the generating sequence for x^{(1)} is denoted by g^{(1)} and is given by,
g^{(1)} = [1, 0, 0] …(i)
This is because only S_{1} is used. Hence, the 1 represents S_{1} and the two 0s represent S_{2} and S_{3}.
Similarly, the generating sequence for x^{(2)} is g^{(2)} and is given by
g^{(2)} = [1, 0, 1] …(ii)
This is because, to obtain x^{(2)}, we have used S_{1} and S_{3}. So, the two 1s represent S_{1} and S_{3} whereas the 0 represents the unused S_{2}.
Similarly, the generating sequence for x^{(3)} is g^{(3)} and, because S_{1} and S_{2} are involved in the generation of x^{(3)}, the generator sequence is given by
g^{(3)} = [1, 1, 0] …(iii)
The corresponding generator polynomials are given by :
G^{(1)}(p) = g_{0}^{(1)} + g_{1}^{(1)}p + g_{2}^{(1)} p^{2}
Therefore, G^{(1)}(p) = 1 + 0p + 0p^{2} = 1 [Substituting g_{0}^{(1)} = 1 and g_{1}^{(1)} = g_{2}^{(1)} = 0]
G^{(2)}(p) = g_{0}^{(2)} + g_{1}^{(2)}p + g_{2}^{(2)} p^{2}
or G^{(2)}(p) = 1 + 0p + 1p^{2} = 1 + p^{2}
and G^{(3)}(p) = g_{0}^{(3)} + g_{1}^{(3)}p + g_{2}^{(3)} p^{2}
or G^{(3)}(p) = 1 + 1p + 0p^{2}
or G^{(3)}(p) = 1 + p
Thus, the generator polynomials are given by :
G^{(1)}(p) = 1
G^{(2)}(p) = 1 + p^{2 }…(iv)
G^{(3)}(p) = 1 + p
(ii) The message signal is given by m = [1 1 0 1 0 1 0 0]
Hence, the corresponding message polynomial is given by :
M(p) = 1 + 1p + 0p^{2} + 1p^{3} + 0p^{4} + 1p^{5} + 0p^{6} + 0p^{7}
M(p) = 1 + p + p^{3} + p^{5 }…(v)
(iii) The codeword polynomials for the three outputs can be obtained by multiplying the corresponding generator polynomial with the message polynomial. Hence, the three codeword polynomials are given by
X(1) (p) = G(1) (p) . M (p)
= 1 . [1 + p + p^{3} + p^{5}]
Therefore, we have
X^{(1)} (p) = 1 + p + p^{3} + p^{5 }…(vi)
Hence, the corresponding codeword sequence at the first output is :
x_{i}^{(1)} = (1, 1, 0, 1, 0, 1) …(vii)
Similarly, the codeword polynomial for the second output is given by
X^{(2)} (p) = G^{(2)} (p) . M (p)
X^{(2)} (p) = (1 + p^{2}) (1 + p + p^{3} + p^{5})
X^{(2)} (p) = 1 + p + p^{3} + p^{5} + p^{2} + p^{3} + p^{5} + p^{7}
Due to modulo-2 addition, we have
X^{(2)}(p) = 1 + p + p^{2} + p^{7} … (viii)
The corresponding codeword sequence at this output is given by
x_{i}^{(2)} = (1, 1, 1, 0, 0, 0, 0, 1) …(ix)
And the codeword polynomial for the third output is given by
X^{(3)}(p) = G^{(3)} (p) . M (p)
or X^{(3)} (p) = (1 + p) (1 + p + p^{3} + p^{5})
or X^{(3)}(p) = 1 + p + p^{3} + p^{5} + p + p^{2} + p^{4} + p^{6}
⸫ X^{(3)}(p) = 1 +p^{2} + p^{3} + p^{4} + p^{5} + p^{6} …(x)
This corresponding codeword sequence at this output is given by
x_{i}^{(3)} = (1, 0, 1, 1, 1, 1, 1, 0) …(xi)
The code sequences x_{i}^{(1)}, x_{i}^{(2)} and x_{i}^{(3)} are obtained from equations (vii), (ix) and (xi) by appending zeros to make all the sequences 8-bit long. Hence, the code sequences are given by,
x_{i}^{(1)} = [1 1 0 1 0 1 0 0]
x_{i}^{(2)} = [1 1 1 0 0 0 0 1]
x_{i}^{(3)} = [1 0 1 1 1 1 1 0]
(iv) The final codeword sequence may be obtained by multiplexing these sequences as under :
{x_{i}} = [111 110 011 101 001 101 001 010]
This is the required sequence.
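The whole polynomial-multiplication-and-multiplexing procedure of this example can be verified with a short sketch (GF(2) convolution of the message with each generator sequence, then interleaving):

```python
# Polynomial multiplication over GF(2), truncated to the message length.
def gf2_convolve(msg, gen, length):
    out = [0] * length
    for i, m in enumerate(msg):
        for j, g in enumerate(gen):
            if i + j < length:
                out[i + j] ^= m & g
    return out

m = [1, 1, 0, 1, 0, 1, 0, 0]
g1, g2, g3 = [1, 0, 0], [1, 0, 1], [1, 1, 0]
x1, x2, x3 = (gf2_convolve(m, g, 8) for g in (g1, g2, g3))
codeword = " ".join(f"{a}{b}{c}" for a, b, c in zip(x1, x2, x3))
print(codeword)  # 111 110 011 101 001 101 001 010
```

The printed sequence agrees with the multiplexed codeword {x_i} obtained above.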
Viterbi algorithm for decoding the given sequence
Figure 10.65 shows the diagram based on Viterbi decoding. The received signal Y is written at the top. The (Y + E) sequence along with the message sequence is also shown at the bottom.
The maximum likelihood path with minimum running metric (3) is shown by dark line. The decoded message is given by,
m = (1 1 0 1 0 1 0 0)
The signal in each interval is obtained from the nature of the maximum likelihood path. When the path is solid, the message is 0 and when the path is dotted, the message is 1.
diagram
FIGURE 10.65 Viterbi decoding for example 10.43.
EXAMPLE 10.44. Determine the state diagram for the convolutional encoder shown in figure 10.66. Draw the Trellis diagram through the first set of steady state transitions. On the second Trellis diagram, show the termination of the Trellis to all-zero state.
diagram
FIGURE 10.66
Solution :
diagram
FIGURE 10.67 Given encoder in redrawn form.
Let us assume that all the registers have been cleared so S_{1}, S_{2} and S_{3} contain zeros in the beginning.
x_{1} = x_{i}^{(1)} = S_{1} ⊕ S_{2} ⊕ S_{3} …(i)
x_{2} = x_{i}^{(2)} = S_{1} ⊕ S_{3} …(ii)
and the encoder output = x_{l} x_{2}.
If the initial state is S_{3} S_{2} = 00 and if S_{1} = 0, then, we have
Output x_{1} = 0 ⊕ 0 ⊕ 0 = 0
x_{2} = 0 ⊕ 0 = 0
Therefore, x_{1} x_{2} = 00 … if S_{1} = 0
Next state S_{3} S_{2} = 00
But, if S_{1} = 1 then
x_{1} = 1 ⊕ 0 ⊕ 0 = 1
and x_{2} = 1 ⊕ 0 = 1
Therefore, we have x_{1} x_{2} = 11 … if S_{1} = 1
Next state S_{3} S_{2} = 01
If we define the four different states as under :
00 = a, 01 = b, 10 = c and 11 = d
Then, the state diagram will be as shown in figure 10.68.
diagram
FIGURE 10.68 State diagram of the given decoder.
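The transitions just derived for states a to d can be enumerated mechanically; a sketch using the output equations x1 = S1 ⊕ S2 ⊕ S3 and x2 = S1 ⊕ S3 of this example (the state is the pair S3 S2, and after a shift the new S3 is the old S2 while the new S2 is the input S1):

```python
# Enumerate all state transitions of the Example 10.44 encoder.
names = {(0, 0): "a", (0, 1): "b", (1, 0): "c", (1, 1): "d"}
transitions = {}
for (s3, s2), name in names.items():
    for s1 in (0, 1):
        x1, x2 = s1 ^ s2 ^ s3, s1 ^ s3
        nxt = (s2, s1)                  # shift: S3 <- S2, S2 <- S1
        transitions[(name, s1)] = (f"{x1}{x2}", names[nxt])

print(transitions[("a", 0)], transitions[("a", 1)])  # ('00', 'a') ('11', 'b')
```

The two printed entries agree with the transitions worked out step by step above.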
The code trellis in the steady state transition will be as shown in figure 10.69.
diagram
FIGURE 10.69
EXAMPLE 10.45. The encoder shown in figure 10.70 generates an all-zero sequence which is sent over a binary symmetric channel. The received sequence is 01 00 10 00…. There are two errors in this sequence (at the second and fifth positions). Show that this double error can be detected and corrected by applying the Viterbi algorithm.
diagram
FIGURE 10.70
Solution : The trellis diagram for the encoder shown in figure 10.70 is shown in figure 10.71.
diagram
FIGURE 10.71 The trellis diagram.
From this trellis diagram, we have drawn the Viterbi decoding diagram of figure 10.72. The input signal Y = 01 00 10 00… .
diagram
FIGURE 10.72 Viterbi diagram.
From the viterbi diagram, let us write the possible paths for each state a_{4}, b_{4}, c_{4} and d_{4} and the running path metric for each path is as shown below :
State | Possible Paths | Running Path Metric | |
a_{4} | a_{0} – a_{1} – a_{2} – a_{3} – a_{4} | 2 | ✓ |
a_{0} – a_{1} – b_{2} – c_{3} – a_{4} | 5 | x | |
b_{4} | a_{0} – a_{1} – a_{2} – a_{3} – b_{4} | 4 | x |
a_{0} – b_{1} – c_{2} – a_{3} – b_{4} | 5 | x | |
a_{0} – b_{1} – d_{2} – c_{3} – b_{4} | 5 | x | |
a_{0} – a_{1} – b_{2} – c_{3} – b_{4} | 3 | ✓ | |
c_{4} | a_{0} – a_{1} – a_{2} – b_{3} – c_{4} | 3 | ✓ |
a_{0} – b_{1} – c_{2} – b_{3} – c_{4} | 4 | x | |
a_{0} – b_{1} – d_{2} – d_{3} – c_{4} | 3 | x | |
a_{0} – a_{1} – b_{2} – d_{3} – c_{4} | 6 | x | |
d_{4} | a_{0} – a_{1} – a_{2} – b_{3} – d_{4} | 3 | ✓ |
a_{0} – b_{1} – c_{2} – b_{3} – d_{4} | 4 | x | |
a_{0} – b_{1} – d_{2} – d_{3} – d_{4} | 3 | x | |
a_{0} – a_{1} – b_{2} – d_{3} – d_{4} | 6 | x |
Out of the possible paths listed above, we select four survivor paths having the minimum value of running path metric. The survivor paths are marked by a (✓) sign. They are as under :
S.No. | Paths | Path Metric |
1. | a_{4} → a_{0} – a_{1} – a_{2} – a_{3} – a_{4} | 2 |
2. | b_{4} → a_{0} – a_{1} – b_{2} – c_{3} – b_{4} | 3 |
3. | c_{4} → a_{0} – a_{1} – a_{2} – b_{3} – c_{4} | 3 |
4. | d_{4} → a_{0} – a_{1} – a_{2} – b_{3} – d_{4} | 3 |
Out of these survivor paths, the path having the minimum running path metric, equal to 2, is the path (a_{0} – a_{1} – a_{2} – a_{3} – a_{4}). Hence, the encoded signal corresponding to this path is given by
a_{4} → 00 00 00 00.
This is corresponding to the received signal 01 00 10 00.
diagram
This shows that the Viterbi algorithm can correct the errors present in the received signal.
EXAMPLE 10.46. Figure 10.73 depicts a rate 1/2, constraint length N = 2, convolutional code encoder. Sketch the code-tree for the same.
Solution : The constraint length k = 2 and its rate is 1/2. This means that for a single input binary bit, two bits V_{1} and V_{2} are encoded at the output. S_{1} acts as input and S_{2} acts as the state. For S_{2}, there are two possible values as under :
S_{2} = 0 …State a
S_{2} = 1 …State b
diagram
FIGURE 10.73
We assume that S_{1} and S_{2} both are zero. From figure 10.73, we can write that,
V_{1} = S_{1}
V_{2} = S_{1} ⊕ S_{2}
The state diagram
Before obtaining the code tree, we have to draw the state diagram as under :
(i) Let the initial contents S_{1} S_{2} = 00
Diagram
(ii) Now, we assume that a 0 is applied at the input. Then the next state is also a, as shown in figure 10.74. The state remains the same, i.e., a, and the encoder outputs are V_{1} = 0 and V_{2} = 0.
diagram
FIGURE 10.74
(iii) Now, if the present state of the encoder is a and if the input is 1, then contents S_{1}, S_{2} = 10. With input 1 the next state of the encoder will be b and V_{1} = V_{2} = 1 as shown in figure 10.75.
diagram
FIGURE 10.75.
(iv) Similarly, we can obtain the outputs and states for other possible states as shown in figure 10.76 (a) and (b)
Taking the figures 10.74, 10.75 and 10.76 (a) and (b) into consideration, we can draw the state diagram as shown in figure 10.76 (c). The code tree is as shown in figure 10.76 (d). It has been drawn by referring to the state diagram in figure 10.76 (c).
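The code tree can also be generated recursively; a sketch assuming the output equations V1 = S1 and V2 = S1 ⊕ S2 used in this example, with S2 as the single state bit (upper branch = input 0, lower branch = input 1):

```python
# Generate the leaf labels of a depth-3 code tree for the Example 10.46
# encoder; the next state S2 is simply the last input bit S1.
def code_tree(state, depth):
    if depth == 0:
        return [""]
    leaves = []
    for s1 in (0, 1):                         # input bit
        v1, v2 = s1, s1 ^ state               # branch outputs V1 V2
        for rest in code_tree(s1, depth - 1):
            leaves.append(f"{v1}{v2} {rest}".strip())
    return leaves

leaves = code_tree(0, 3)
print(leaves[0], "|", leaves[4])  # input 000 vs input 100
```

The leaf for input 000 is 00 00 00 and the leaf for input 100 is 11 01 00, matching the branch outputs worked out above.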
diagram
FIGURE 10.76. (Contd…)
diagram
FIGURE 10.76
EXAMPLE 10.47. For the convolutional encoder of figure 10.77, produce a state diagram, and from it produce the trellis diagram.
diagram
FIGURE 10.77 Given convolutional encoder.
Solution : Solve yourself.
EXAMPLE 10.48. For the input sequences from 0000 to 1111 and the circuit in figure 10.78 determine the output sequence.
diagram
FIGURE 10.78.
Solution :
(i) First, we assume that shift register contents are 000 initially.
(ii) Then, we assume that input message is 0001.
(iii) Next, we obtain the values of V_{1} and V_{2} for each input bit.
(iv) Lastly, we interleave the V_{1} and V_{2} bits to obtain the output sequence.
The solution has been shown in figure 10.79 (a) to 10.79(d).
(a) diagram
(b) diagram
(c) diagram
FIGURE 10.79 (Contd…)
(d) diagram
(e) diagram
FIGURE 10.79
EXAMPLE 10.49. For the given encoder shown in figure 10.80, obtain the convolutional code for the bit sequence 1 1 0 1 1 0 1 1 and decode it by constructing the corresponding code tree.
Solution : To obtain the convolutional code for the bit sequence 1 1 0 1 1 0 1 1, please go through the example 10.48.
The code tree for the given encoder has been shown in figure 10.81.
We follow the path shown by the thick arrows in the code tree of figure 10.81. The encircled numbers indicate the outputs V_{2} V_{1}. Table 10.19 shows the input bits, state of the encoder, next state and encoder output.
diagram
FIGURE 10.80.
Table 10.19.
S.No. | Input bit | Encoder State m_{2} m_{1} | Next State m_{2} m_{1} | Output V_{1} V_{2} |
1. | 1 | a (00) | b (01) | 0 1 |
2. | 1 | b (01) | d (11) | 0 0 |
3. | 0 | d (11) | c (10) | 0 1 |
4. | 1 | c (10) | b (01) | 0 1 |
5. | 1 | b (01) | d (11) | 0 0 |
6. | 0 | d (11) | c (10) | 0 1 |
7. | 1 | c (10) | b (01) | 0 1 |
8. | 1 | b (01) | d (11) | 1 1 |
diagram
FIGURE 10.81 Code tree for Example 10.49
Hence, the encoder output will be given by
Encoder output = V_{2} V_{1} V_{2} V_{1} V_{2} V_{1} …
Substituting the values of V_{2} V_{1} from table 10.19, we can write,
Encoder output = 10 00 10 10 00 10 10 11
EXAMPLE 10.50. Determine the code efficiency and code rate for the system given in example 10.49.
Solution : The code efficiency and code rate are one and the same.
Therefore, we write
Code rate = Code efficiency = k/n
where k = Number of message bits in a codeword, and
n = Number of transmitted bits.
For the encoder given in the example 10.49, for every input message bit, two encoded bits V_{1} and V_{2} are transmitted.
Hence, we have
Code rate = Code efficiency = k/n = 1/2 Ans.
EXAMPLE 10.51. A convolutional encoder has a single shift register with two stages, three modulo-2 adders and an output multiplexer. The generator sequences of the encoder are as under :
g^{(1)} = (1, 0, 1), g^{(2)} = (1, 1, 0) and g^{(3)} = (1, 1, 1)
Draw the block diagram of the encoder.
Solution : The generator sequences of the convolutional encoder are given.
The convolutional encoder has been shown in figure 10.82.
diagram
FIGURE 10.82. Encoder
EXAMPLE 10.52. For the convolutional encoder shown in figure 10.83, sketch the code tree.
diagram
FIGURE 10.83.
Solution : The values of c_{1}, c_{2} and c_{3} are generated depending on the values of p_{1}, p_{2} and p_{3} as under :
c_{1} = p_{1} ⊕ p_{2} ⊕ p_{3}
c_{2} = p_{1}
and c_{3} = p_{1} ⊕ p_{2}
The encoder output will be obtained by interleaving the bits c_{1}, c_{2} and c_{3} for each message input.
Therefore, Encoder output = c_{1} c_{2} c_{3} c_{1} c_{2} c_{3} ….
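This interleaving rule can be sketched directly; the register shift p2 ← p1, p3 ← p2 after each input bit is an assumption consistent with the register order p1 p2 p3 used in this example:

```python
# Emit c1 c2 c3 per message bit for the Example 10.52 encoder:
# c1 = p1 ^ p2 ^ p3, c2 = p1, c3 = p1 ^ p2.
def encode(bits):
    p2 = p3 = 0
    out = []
    for p1 in bits:
        out.append(f"{p1 ^ p2 ^ p3}{p1}{p1 ^ p2}")   # c1 c2 c3
        p2, p3 = p1, p2                              # shift the register
    return " ".join(out)

print(encode([0, 0]))     # 000 000: state a with all-zero input
print(encode([1, 0, 1]))  # 111 101 011
```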
Let p_{1} represent the message input and p_{2} p_{3} represent the state. The states are defined as under :
Encoder States
p_{3} | p_{2} | State |
0 | 0 | a |
0 | 1 | b |
1 | 0 | c |
1 | 1 | d |
Let the initial contents of the shift register be p_{1} p_{2} p_{3} = 0 0 0, so that the initial state is a. If the first message bit is 0, then the shift register contents remain 000. Hence the encoder state is 00 and the encoder outputs are c_{1} c_{2} c_{3} = 0 0 0. This has been shown in figure 10.84.
diagram
FIGURE 10.84 Development of code tree.
By following this procedure, we can obtain the complete code tree as shown in figure 10.85.
diagram
FIGURE 10.85 Code tree.
EXAMPLE 10.52. For the convolutional encoder shown in figure 10.86, sketch the code tree.
diagram
FIGURE 10.86.
Solution : Solve yourself. This example is identical to example 10.42.
EXAMPLE 10.53. For the convolutional encoder shown in figure 10.87 determine the encoder output produced by the message sequence 1011110. Construct the code tree and show how it can be used for finding the encoder output.
diagram
FIGURE 10.87.
Solution : To find the encoder output, we follow the procedure given in the preceding examples.
EXAMPLE 10.54. (a) The generator polynomial of a (7, 4) cyclic code is x^{3} + x + 1. Construct the generator matrix for a systematic cyclic code and find the codeword for the message (1101) using the generator matrix.
(b) Verify by division method.
(c) A convolutional code is described by
g_{1} = [110], g_{2} = [101], g_{3} = [111]
(i) Draw the encoder corresponding to this code.
(ii) Draw the state transition diagram for this code.
(iii) Draw the trellis diagram for this code.
Solution : (a) Let us first construct the generator matrix.
We know that the i^{th} row of the generator matrix is given by
x^{(n-i)} ⊕ R_{i}(x) = Q_{i}(x) G(x) ; where i = 1, 2, …, k …(i)
It is given that the cyclic code is systematic (7, 4) code. Therefore, n = 7, k = 4 and (n – k) = 3. Substituting these values into the above expression, we obtain
x^{(7-i)} ⊕ R_{i}(x) = Q_{i}(x) . (x^{3} + x + 1) ; where i = 1, 2, …, 4
With i = 1, the above expression will be
x^{6} ⊕ R_{i}(x) = Q_{i}(x) (x^{3} + x + 1) …(ii)
Let us obtain the value of Q_{i}(x). The quotient Q_{i}(x) can be obtained by dividing x^{(n – i)} by G(x). Therefore, to obtain Q_{i}(x) , let us divide x^{6} by (x^{3} + x + 1). The division takes place as under :
diagram
Here, the quotient polynomial is
Q_{i}(x) = x^{3} + x + 1
and the remainder polynomial is
R_{i}(x) = x^{2} + 0x + 1
Substituting these values into equation (ii), we obtain
x^{6} ⊕ R_{i}(x) = (x^{3} + x + 1) (x^{3} + x + 1)
= x^{6} + x^{4} + x^{3} + x^{4} + x^{2} + x + x^{3} + x + 1
= x^{6} + 0x^{5} + (1 ⊕ 1) x^{4} + (1 ⊕ 1) x^{3} + x^{2} + (1 ⊕ 1) x + 1
= x^{6} + 0x^{5} + 0x^{4} + 0x^{3} + x^{2} + 0x + 1
Therefore, 1^{st} Row polynomial ⇒ x^{6} + 0x^{5} + 0x^{4} + 0x^{3} + x^{2} + 0x + 1
and 1^{st} Row Elements Þ 1 0 0 0 1 0 1
Using the same procedure, we can obtain the polynomials for the other rows of the generator matrix as under :
2^{nd} Row Polynomial Þ x^{5} + x^{2} + x + 1
3^{rd} Row Polynomial Þ x^{4} + x^{2} + x
4^{th} Row Polynomial Þ x^{3} + x + 1
These polynomials can be transformed into the generator matrix as under :
x^{6} x^{5} x^{4} x^{3} x^{2} x^{1} x^{0}
G =
| 1 0 0 0 1 0 1 |
| 0 1 0 0 1 1 1 |
| 0 0 1 0 1 1 0 |
| 0 0 0 1 0 1 1 |
This is the required generator matrix.
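The generator matrix can be checked by carrying out the x^(7-i) mod g(x) divisions programmatically; a sketch with polynomial coefficient lists stored LSB first:

```python
# Remainder of a GF(2) polynomial division (coefficient lists, LSB first).
def poly_mod(dividend, divisor):
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:                                  # cancel the leading term
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[:len(divisor) - 1]

g = [1, 1, 0, 1]                                  # g(x) = 1 + x + x^3
rows = []
for i in range(1, 5):                             # rows i = 1 .. 4
    x_pow = [0] * (7 - i) + [1]                   # x^(7 - i)
    rem = poly_mod(x_pow, g)                      # remainder R_i(x)
    # systematic row: identity part, then parity bits (MSB first)
    row = [1 if 7 - i == 6 - k else 0 for k in range(4)] + rem[::-1]
    rows.append(row)
for r in rows:
    print(r)   # rows of G: 1000101, 0100111, 0010110, 0001011
```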
Now, let us obtain the parity check matrix [H].
The parity check matrix [H] is given by
H = [P^{T }: I_{3×3}]
The transpose matrix P^{T} is given by interchanging the rows and columns of the P matrix.
This means that
P^{T} =
| 1 1 1 0 |
| 0 1 1 1 |
| 1 1 0 1 |
Hence, the parity check matrix is given by,
H =
| 1 1 1 0 1 0 0 |
| 0 1 1 1 0 1 0 |
| 1 1 0 1 0 0 1 |
This is the required parity check matrix [H].
Also, the code vectors in the systematic form may be obtained as under :
X = MG
where M = Message matrix
G = Generator matrix
Therefore, we have X = [1 1 0 1] G
or X = [1 1 0 1 : 0 0 1]
This is the required codeword.
(b) We know that the message polynomial M(x) is given by
M(x) = m_{3}x^{3} + m_{2}x^{2} + m_{1}x + m_{0}
or M(x) = x^{3} + x^{2} + 1
Now, we multiply M(x) by x^{(n-k)}.
Here, n – k = 3
Therefore, x^{3 }M(x) = x^{3} (x^{3} + x^{2} + 1) = x^{6 }+ x^{5} + x^{3}
Now, let us perform the division.
We divide x^{3} M(x) by G(x).
equation
The remainder polynomial C(x) = 1.
Now, the codeword polynomial can be given as under :
X(x) = x^{n-k} M(x) + C(x) = (x^{6} + x^{5} + 0x^{4} + x^{3} + 0x^{2} + 0x + 0) + 1
or X(x) = x^{6} + x^{5} + 0x^{4} + x^{3} + 0x^{2} + 0x + 1
Therefore, codeword X = (1 1 0 1 0 0 1).
This is the same as the code vector obtained in part (a). Ans.
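Part (b) can be verified numerically with the same kind of GF(2) division; a short sketch (coefficient lists LSB first):

```python
# Parity bits = remainder of x^(n-k) M(x) divided by g(x) over GF(2).
def gf2_remainder(dividend, divisor):
    r = dividend[:]
    for i in range(len(r) - 1, len(divisor) - 2, -1):
        if r[i]:
            for j, d in enumerate(divisor):
                r[i - len(divisor) + 1 + j] ^= d
    return r[:len(divisor) - 1]

g = [1, 1, 0, 1]                     # g(x) = 1 + x + x^3
m_shifted = [0, 0, 0, 1, 0, 1, 1]    # x^3 M(x) = x^3 + x^5 + x^6
c = gf2_remainder(m_shifted, g)
print(c)  # [1, 0, 0] -> C(x) = 1, so the codeword is 1101 : 001
```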
(c) We can draw the convolutional encoder as under :
Given that : g_{1} = [110]
g_{2} = [101]
g_{3} = [111]
The required encoder will be as shown in figure 10.88.
diagram
FIGURE 10.88 Convolutional encoder