DECODING METHODS OF CONVOLUTIONAL CODES : Viterbi Algorithm, Feedback Decoding, Sequential Decoding

By   April 16, 2020

There are three methods for decoding the convolutional codes. They are as under :
(i)         Viterbi algorithm
(ii)        Feedback decoding
(iii)       Sequential decoding
10.17.1. Viterbi Algorithm
            The Viterbi algorithm operates on the principle of maximum likelihood decoding and achieves optimum performance. The maximum likelihood decoder has to examine the entire received sequence Y and find a valid path which has the smallest Hamming distance from Y. However, there are 2^N possible paths for a message sequence of N bits, so a brute-force comparison is impractical. The Viterbi algorithm applies the maximum likelihood principle while limiting the comparison to a small number of surviving paths, which makes maximum likelihood decoding practical. Before we explain the Viterbi algorithm for the decoding of convolutional codes, it is necessary to define certain important terms.
Metric
            The metric is defined as the Hamming distance of each branch of a surviving path from the corresponding branch of Y (the received signal). The metric is defined by assuming that 0s and 1s have the same transmission error probability.
Surviving Path
            The surviving path is defined as the path of the decoded signal with minimum metric.
(i)         Let the received signal be represented by Y. The Viterbi decoder assigns a metric to each branch of each surviving path.
(ii)        By summing the branch metrics, we get the path metric. To understand the Viterbi algorithm better, let us solve the example given below :
EXAMPLE 10.42. Given the convolutional encoder of figure 10.48 and for a received signal Y= 11 01 11. Show the first three branches of the valid paths emerging from the initial node a0 in the code trellis.
Solution : The received or input signal Y= 11 01 11.
Let us consider the code trellis diagram of figure 10.53 for the (2, 1, 2) encoder. It shows that for the current state a, the next state can be either a or b depending on the message bit m0 = 0 or 1. We have redrawn these two branches in figure 10.56, where a0 represents the initial state and a1 and b1 represent the next possible states.
The solid line represents the branch for m0 = 0 and the dotted line represents the branch for m0 = 1
FIGURE 10.56 First step in Viterbi’s algorithm
In figure 10.56, the numbers in brackets written below the branches represent the branch metrics, where each branch metric is the Hamming distance between the encoder output for that branch and the corresponding received bits of Y. For example, for the branch a0 to a1, the encoded signal is 00 and the received signal is 11; hence, the branch metric is (2). For the branch a0 to b1, the encoded signal and the received signal are both 11; hence, the branch metric is (0).
The encircled number at the right-hand end of each branch represents the running path metric, which is obtained by summing the branch metrics from a0. For example, the running path metric for the branch a0 a1 is 2 and that for the branch a0 b1 is 0.
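These two definitions translate directly into code. A minimal Python sketch (the function name is my own choice, not from the text):

```python
def branch_metric(encoded: str, received: str) -> int:
    """Hamming distance between a branch's encoder output and the received bits."""
    return sum(e != r for e, r in zip(encoded, received))

# Branches of figure 10.56; the received segment of Y is "11".
m_a0_a1 = branch_metric("00", "11")   # branch a0 -> a1, encoded 00
m_a0_b1 = branch_metric("11", "11")   # branch a0 -> b1, encoded 11
print(m_a0_a1, m_a0_b1)               # 2 0

# The running path metric is simply the sum of branch metrics along the path.
running_metric_a0_a1 = m_a0_a1        # path a0 -> a1 so far: metric 2
```

The encircled running metrics in figures 10.56 to 10.58 are exactly such sums accumulated branch by branch.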
FIGURE 10.57 Second step in viterbi algorithm.
When the next part of input bits i.e., Y = 01 are received at the nodes a1 and b1, then four possible branches emerge from these two nodes as shown in figure 10.57. The next four possible states are a2, b2, c2 and d2.
The numbers in the brackets written below each branch represent the branch metric. For example, for the branch a1 a2, the branch metric is (1), which is the Hamming distance between the encoded signal 00 and the received signal 01. The running path metric for the same branch is 3, which is obtained by adding the branch metrics from a0 [(2) + (1) = 3, from a0 to a1 and a1 to a2].
Similarly, the path metric for the path a0 a1 b2 is 3, that of the path a0 b1 d2 is 0, and so on.
FIGURE 10.58 Paths and their path metrices for the viterbi algorithm.
Choosing the Paths of Smaller Metric

Figure 10.58 shows the different paths. We must carefully examine the path metric of each path. For example, the path a0 a1 a2 a3 has a path metric equal to 5. The other paths and their path metrics are listed in table 10.18.

Table 10.18.

S. No.   Path             Path metric   Decision
1        a0 a1 a2 a3      5             ✗
2        a0 a1 a2 b3      3             ✓ (survivor)
3        a0 a1 b2 c3      4             ✗
4        a0 a1 b2 d3      4             ✗
5        a0 b1 c2 a3      2             ✓ (survivor)
6        a0 b1 c2 b3      4             ✗
7        a0 b1 d2 c3      1             ✓ (survivor)
8        a0 b1 d2 d3      1             ✓ (survivor)

Now, observe the path a0 a1 a2 b3, which arrives at node b3 with a metric of 3. Regardless of what happens subsequently, this path will have a smaller Hamming distance from Y than any other path arriving at b3. Thus, this path is more likely to represent the actual transmitted sequence. Therefore, we discard the larger-metric path arriving at each of the nodes a3, b3, c3 and d3, leaving a total of 2^(kL) = 4 surviving paths, marked with a (✓) sign in table 10.18.
The paths marked by a (✗) in table 10.18 and figure 10.58 are the larger-metric paths and hence are discarded; at each node, the path with the smaller metric is declared the survivor. Note that there is one survivor for each node (see table 10.18).
Thus, the surviving paths are : a0 a1 a2 b3, a0 b1 c2 a3, a0 b1 d2 c3 and a0 b1 d2 d3. None of the surviving path metrics is equal to zero. This shows the presence of detectable errors in the received signal Y.
FIGURE 10.59 Illustration of viterbi algorithm for maximum likelihood decoding.
Figure 10.59 depicts the continuation of figure 10.58 for a complete message of N = 12 bits. All the discarded branches and all labels except the running path metrics have been omitted for the sake of simplicity. The letter T under a node shows that the two arriving paths had equal running metrics; under such circumstances, the choice of survivor is arbitrary. The maximum likelihood path follows the solid line from a0 to a12, as shown in figure 10.59. The final value of the path metric is 2, which shows that there are at least two transmission errors present in Y.
The decoder takes the corresponding transmitted sequence to be Y ⊕ E, and the decoded message sequence M has been written below the trellis.
From figure 10.59, we observe that at node a12 only one path has arrived with metric 2. This path is shown by a dark line. Note that since this path has the lowest metric, it is the surviving path and signal Y is decoded from this path. Whenever this path is a solid line, the message is 0 and when it is a dotted line, the message is 1. This has been shown in figure 10.59.
A Viterbi decoder must calculate two metrics for each node (branch metric and path metric) and store 2^(kL) surviving paths, each consisting of N branches. Hence, the decoding complexity increases exponentially with L and linearly with N. Therefore, the Viterbi algorithm is used only for codes with small values of L.
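The complete procedure above can be sketched in Python. As an assumption (figure 10.48 is not reproduced here), the (2, 1, 2) encoder is taken to have generator sequences g1 = (1, 1, 1) and g2 = (1, 0, 1); this choice reproduces every branch and path metric of figures 10.56 to 10.58:

```python
# Hard-decision Viterbi decoder for a (2,1,2) convolutional code.
# Assumed generators (not confirmed by the figure): g1=(1,1,1), g2=(1,0,1).
G = [(1, 1, 1), (1, 0, 1)]            # taps act on (m, m_prev1, m_prev2)

def encode_branch(m, state):
    """Encoder output bits for input bit m leaving state (prev1, prev2)."""
    window = (m,) + state
    return tuple(sum(g * w for g, w in zip(gen, window)) % 2 for gen in G)

def viterbi(received):
    """received: list of 2-bit tuples. Returns (best path metric, decoded bits)."""
    states = [(a, b) for a in (0, 1) for b in (0, 1)]
    # path metric and decoded history per state; decoding starts in state (0, 0)
    metric = {s: (0 if s == (0, 0) else float("inf")) for s in states}
    history = {s: [] for s in states}
    for r in received:
        new_metric, new_history = {}, {}
        for s in states:
            for m in (0, 1):
                out = encode_branch(m, s)
                bm = sum(o != y for o, y in zip(out, r))   # branch metric
                nxt = (m, s[0])                            # shift-register update
                cand = metric[s] + bm                      # running path metric
                if nxt not in new_metric or cand < new_metric[nxt]:
                    new_metric[nxt] = cand                 # keep the survivor only
                    new_history[nxt] = history[s] + [m]
        metric, history = new_metric, new_history
    best = min(metric, key=metric.get)
    return metric[best], history[best]

Y = [(1, 1), (0, 1), (1, 1)]          # received sequence 11 01 11 of example 10.42
print(viterbi(Y))
```

After the three steps the four survivor metrics are 2, 3, 1 and 1 at a3, b3, c3 and d3, exactly the (✓) rows of table 10.18; only one survivor per state is ever stored, which is the source of the complexity figures quoted above.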
10.17.2. Metric Diversion Effect
For a large number of message bits to be decoded, the storage requirement becomes extremely large. This problem can be reduced by exploiting the metric diversion effect: the metrics of the unlikely paths quickly diverge from that of the correct path, so the surviving paths usually share a common initial stem and the decoder can release the early message bits without waiting for the end of the sequence. The metric diversion effect is thus used for reducing the required memory storage.
10.17.3. Free Distance and Coding Gain
            The error detection and correction capability of the block and cyclic codes depends on the minimum distance dmin between the code vectors. But in the case of a convolutional code, the entire transmitted sequence is to be considered as a single code vector. Therefore, the free distance (dfree) is defined as the minimum distance between the code vectors. Since the code is linear, the minimum distance between code vectors is the same as the minimum weight of a nonzero code vector. Hence, the free distance is equal to the minimum weight of the code vectors.
Free distance, dfree = minimum distance between the code vectors
= minimum weight of the code vectors
If X represents the transmitted signal, then the free distance is given by,
dfree =  [W(X)]min                                                          …(10.56)
where [W(X)]min = Minimum weight of the code vector.
The way minimum distance decides the capacity of the block or cyclic codes to detect and correct errors, the free distance will decide the error control capacity for the convolutional code.
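As a sketch of how the free distance can be computed in practice, the following searches the trellis for the minimum-weight path that departs from and later remerges with the all-zero state. The (2, 1, 2) code with g1 = (1, 1, 1), g2 = (1, 0, 1) is used purely as an illustrative assumption; its free distance is 5:

```python
def free_distance(g1=(1, 1, 1), g2=(1, 0, 1), max_len=20):
    """Minimum Hamming weight over paths leaving state 00 and returning to it."""
    best = None
    # Force an initial 1 out of the all-zero state; state = (prev1, prev2).
    out = [sum(a & b for a, b in zip(g, (1, 0, 0))) % 2 for g in (g1, g2)]
    frontier = {(1, 0): sum(out)}          # state -> lightest weight reaching it
    for _ in range(max_len):
        nxt = {}
        for (p1, p2), w in frontier.items():
            for m in (0, 1):
                window = (m, p1, p2)
                out = [sum(a & b for a, b in zip(g, window)) % 2 for g in (g1, g2)]
                nw = w + sum(out)
                s = (m, p1)
                if s == (0, 0):
                    # path has remerged with the all-zero path: a full codeword
                    best = nw if best is None else min(best, nw)
                elif s not in nxt or nw < nxt[s]:
                    nxt[s] = nw            # Viterbi-style pruning by weight
        frontier = nxt
    return best

print(free_distance())   # 5 for this assumed code
```

The pruning is valid for the same reason the Viterbi algorithm is: future weight does not depend on how a state was reached, so only the lightest arrival at each state matters.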
Coding gain (A)
The coding gain (A) is defined as the ratio of the (Eb/N0) required by an uncoded signal to the (Eb/N0) required by the coded signal to achieve the same error probability. The coding gain is used for comparing different coding techniques.
Coding gain (convolutional encoder), A = (r dfree) / 2                           …(10.57)
where                          r = Code rate, and, dfree = The free distance.
10.17.4. Transfer Function of the Convolutional Codes
The distance properties and the error rate performance of a convolutional code can be obtained from its state diagram. The state diagram shown in figure 10.60 will be used to demonstrate the method of obtaining the distance properties of a convolutional code.
First, let us label the branches of the state diagram as either p^0 = 1 or p, p^2, p^3, etc. Here, the exponent of p represents the Hamming distance between the sequence of output bits corresponding to that branch and the sequence of output bits of the all-zero branch. This is because we assume that the input to the encoder is an all-zero sequence.
FIGURE 10.60 State diagram for rate 1/3, k = 3 convolutional code.
The self loop at node a can be eliminated because it does not contribute to the distance properties of a code sequence relative to all zero code sequence. Node a is split into two nodes (a and e in figure 10.61). One of these nodes (a) represents the input of the state diagram and the other one (e) represents the output of the state diagram of figure 10.61. After implementing all these modifications, we get the state diagram shown in figure 10.61.
FIGURE 10.61 State diagram for the rate 1/3, k = 3 convolutional code with distance labels on the branches.
Carefully observe the distance labels marked on various branches in figure 10.61. For example, for the branch from a to c the output is 111 (figure 10.60). Hence in figure 10.61, we have written a distance of p3 on this branch (ac). On the similar lines, the output for the branch b to a in figure 10.60 is 011. Hence, it has been shown as branch (be) in figure 10.61 and the corresponding distance labels is p2. Similarly, the other distance labels have been marked in figure 10.61.
The state diagram of figure 10.61 has five nodes (a to e) and the four state equations can be written as under :
Xc = p^3 Xa + p Xb
Xb = p Xc + p Xd
Xd = p^2 Xc + p^2 Xd
Xe = p^2 Xb                                                       …(10.58)
Here, it may be noted that these state equations have been written by considering the branches incident on each node. For example, for node c, the incident branches are ac and bc, with distance labels p^3 and p respectively. Therefore, the state equation is
Xc = p^3 Xa + p Xb                                            …(10.59)
The transfer function of the code is defined as,
Transfer function T(p) = Xe / Xa
By solving the state equations given by equation (10.58), we obtain
T(p) = p^6 / (1 − 2p^2)                                        …(10.60)
or                                             T(p) = p^6 + 2p^8 + 4p^10 + 8p^12 + …         …(10.61)
(i)         The transfer function of equation (10.61) tells us that the first term p^6 represents a single path of Hamming distance 6. From figure 10.61, it is clear that the distance-6 path is a c b e.
(ii)        The second term of equation (10.61) indicates that there are two paths of distance 8 from node a to node e. These paths are a c d b e and a c b c b e.
(iii)       The third term in equation (10.61) indicates that there are four paths of distance 10.
(iv)       The minimum distance of the code is called the minimum free distance and is denoted by dfree. In this example, dfree = 6.
Thus, the transfer function gives us the distance properties of the convolutional code.
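The series expansion of equation (10.61) can be checked mechanically: T(p) = p^6/(1 − 2p^2) is a geometric series, so the coefficient of p^(6+2n) is 2^n. A small sketch:

```python
# Series expansion of T(p) = p^6 / (1 - 2 p^2) as the geometric series
# p^6 * sum_n (2 p^2)^n  (equations 10.60 and 10.61).

def transfer_coefficients(max_power):
    """Coefficient of p^k in T(p), for k = 0 .. max_power."""
    coeff = [0] * (max_power + 1)
    n = 0
    while 6 + 2 * n <= max_power:
        coeff[6 + 2 * n] = 2 ** n    # 1, 2, 4, 8, ... paths of each distance
        n += 1
    return coeff

c = transfer_coefficients(12)
print(c[6], c[8], c[10], c[12])      # 1 2 4 8 -> p^6 + 2p^8 + 4p^10 + 8p^12
```

The smallest exponent with a nonzero coefficient, 6, is exactly the minimum free distance dfree read off in point (iv) above.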
EXAMPLE 10.43. For the convolutional encoder arrangement shown in figure 10.62, draw the state diagram and hence trellis diagram. Determine output digit sequence for the data digits 11010100. What are the dimensions of the code (n, k) and constraint length ? Use viterbi’s algorithm to decode the following sequence :
100 110 111 101 001 101 001 010
            Compare the convenience of code tree, trellis diagram and state diagram from encoding and decoding of convolutional codes point of view.
Solution : Code Dimension
The code dimension is represented by (n, k, L) where
n = Number of encoded bits per message
k = Number of message bits taken at a time by the encoder
L = Encoder’s memory.
Here, n = 3, k = 1 and L = 2. Therefore, code dimension is given by (3, 1, 2) but if the code dimension is to be expressed as (n, k), then it is (3, 1)
FIGURE 10.62
Therefore, code dimension (n, k) = (3, 1)
Constraint Length
            The constraint length may be defined in following two different ways :
(i)         Constraint length K = n(L + 1) = 3(2 + 1) = 9
(ii)        The other definition states that constraint length is equal to the number of shifts over which a single message bit can influence the encoder output. Applying this definition, the constraint length, K = 3.
Code trellis for the encoder of figure 10.62.
The code trellis for the encoder of figure 10.62 is shown in figure 10.63. The code trellis of figure 10.63 is obtained as under :
(i)         If the current state is S3 S2 = 00, i.e., a, and if S1 = 0, then after shifting all the contents through one bit, the next state will be S3 S2 = 00, i.e., a; this is shown by a solid line. But if S3 S2 = 00 = a is the initial state and S1 = 1, then after the shift, the state will be S3 S2 = 01, i.e., b; hence this is shown by the dotted line emerging from a. The diagram is completed by obtaining the next states for each initial state in the same way.
FIGURE 10.63 Code trellis for the encoder in figure 10.62.
The state diagram
The state diagram obtained from the code trellis is shown in figure 10.64.
FIGURE 10.64 State diagram of the encoder in figure 10.62.
The Output Sequence
(i)         To write the generator polynomials, it is necessary to first write the generating sequences for the outputs x^(1), x^(2) and x^(3). According to figure 10.62, the generating sequence for x^(1) is denoted by g^(1) and is given by,
g^(1) = [1, 0, 0]                                                             …(i)
This is because only S1 is used. Hence, 1 represents S1 and the two 0s represent S2 and S3.
Similarly, the generating sequence for x^(2) is g^(2) and is given by
g^(2) = [1, 0, 1]                                                             …(ii)
This is because, to obtain x^(2), we have used S1 and S3. So, the two 1s represent S1 and S3 whereas the 0 represents the unused S2.
Similarly, because S1 and S2 are involved in the generation of x^(3), the generating sequence g^(3) is given by
g^(3) = [1, 1, 0]                                                             …(iii)
The corresponding generator polynomials are given by :
G^(1)(p) = g0^(1) + g1^(1) p + g2^(1) p^2
Therefore,        G^(1)(p) = 1 + 0p + 0p^2 = 1    [Substituting g0^(1) = 1 and g1^(1) = g2^(1) = 0]
G^(2)(p) = g0^(2) + g1^(2) p + g2^(2) p^2
or                                 G^(2)(p) = 1 + 0p + 1p^2 = 1 + p^2
and                              G^(3)(p) = g0^(3) + g1^(3) p + g2^(3) p^2
or                                 G^(3)(p) = 1 + 1p + 0p^2 = 1 + p
Thus, the generator polynomials are given by :
G^(1)(p) = 1
G^(2)(p) = 1 + p^2                                                                                           …(iv)
G^(3)(p) = 1 + p
The message signal is given by m = [1 1 0 1 0 1 0 0]
Hence, the corresponding message polynomial is given by :
M(p) = 1 + 1p + 0p^2 + 1p^3 + 0p^4 + 1p^5 + 0p^6 + 0p^7
or M(p) = 1 + p + p^3 + p^5                                                                              …(v)
The codeword polynomials for the three outputs can be obtained by multiplying the corresponding generator polynomial with the message polynomial. Hence, the first codeword polynomial is given by
X^(1)(p) = G^(1)(p) . M(p) = 1 . [1 + p + p^3 + p^5]
Therefore, we have
X^(1)(p) = 1 + p + p^3 + p^5                                                            …(vi)
Hence, the corresponding codeword sequence at the first output is :
xi^(1) = (1, 1, 0, 1, 0, 1)                                      …(vii)
Similarly, the codeword polynomial for the second output is given by
X^(2)(p) = G^(2)(p) . M(p)
X^(2)(p) = (1 + p^2) (1 + p + p^3 + p^5)
X^(2)(p) = 1 + p + p^3 + p^5 + p^2 + p^3 + p^5 + p^7
Due to modulo-2 addition, we have
X^(2)(p) = 1 + p + p^2 + p^7                                                      … (viii)
The corresponding codeword sequence at this output is given by
xi^(2) = (1, 1, 1, 0, 0, 0, 0, 1)                                           …(ix)
And the codeword polynomial for the third output is given by
X^(3)(p) = G^(3)(p) . M(p)
or                                 X^(3)(p) = (1 + p) (1 + p + p^3 + p^5)
or                                 X^(3)(p) = 1 + p + p^3 + p^5 + p + p^2 + p^4 + p^6
Therefore,                     X^(3)(p) = 1 + p^2 + p^3 + p^4 + p^5 + p^6                                 …(x)
The corresponding codeword sequence at this output is given by
xi^(3) = (1, 0, 1, 1, 1, 1, 1, 0)                                           …(xi)
The code sequences xi^(1), xi^(2) and xi^(3) are obtained from equations (vii), (ix) and (xi) by appending zeros to make all the sequences 8 bits long. Hence, the code sequences are given by,
xi^(1) = [1 1 0 1 0 1 0 0]
xi^(2) = [1 1 1 0 0 0 0 1]
xi^(3) = [1 0 1 1 1 1 1 0]
The final codeword sequence may be obtained by multiplexing these sequences as under :
{xi} = [111 110 011 101 001 101 001 010]
This is the required sequence.
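The whole computation, convolution of the message with each generating sequence followed by multiplexing, can be sketched as (function name is my own):

```python
# Rate-1/3 encoder of figure 10.62, built from the generating sequences
# g(1) = [1,0,0], g(2) = [1,0,1], g(3) = [1,1,0] derived above.

def conv_encode(message, generators):
    """Convolve the message with each generator (mod 2), truncate to the
    message length, then multiplex the output streams bit by bit."""
    out_streams = []
    for g in generators:
        stream = []
        for t in range(len(message)):
            bit = 0
            for j, gj in enumerate(g):       # modulo-2 convolution sum
                if t - j >= 0:
                    bit ^= gj & message[t - j]
            stream.append(bit)
        out_streams.append(stream)
    # multiplex: one bit from each stream per message bit
    return [s[t] for t in range(len(message)) for s in out_streams]

m = [1, 1, 0, 1, 0, 1, 0, 0]
code = conv_encode(m, [[1, 0, 0], [1, 0, 1], [1, 1, 0]])
print("".join(map(str, code)))   # 111110011101001101001010
```

Grouped in threes, the printed string reads 111 110 011 101 001 101 001 010, in agreement with the multiplexed sequence above.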
Viterbi algorithm for decoding the given sequence
Figure 10.65 shows the diagram based on Viterbi decoding. The received signal Y is written at the top. The second sequence (Y ⊕ E), along with the message sequence, is shown at the bottom.
The maximum likelihood path with minimum running metric (3) is shown by dark line. The decoded message is given by,
m = (1 1 0 1 0 1 0 0)
The signal in each interval is obtained from the nature of the maximum likelihood path. When the path is solid, the message is 0 and when the path is dotted, the message is 1.
FIGURE 10.65 Viterbi decoding for example 10.43.
EXAMPLE 10.44. Determine the state diagram for the convolutional encoder shown in figure 10.66. Draw the Trellis diagram through the first set of steady state transitions. On the second Trellis diagram, show the termination of the Trellis to all-zero state.
FIGURE 10.66
Solution :
FIGURE 10.67 Given encoder in redrawn form.
Let us assume that all the registers have been cleared so S1, S2 and S3 contain zeros in the beginning.
x1 = xi^(1) = S1 ⊕ S2 ⊕ S3                                                              …(i)
x2 = xi^(2) = S1 ⊕ S3                                                                                           …(ii)
and the encoder output = x1 x2.
If the initial state is S3 S2 = 00 and if S1 = 0, then, we have
Output x1 = 0 ⊕ 0 ⊕ 0 = 0
x2 = 0 ⊕ 0 = 0
Therefore,                    x1 x2 = 00 … if S1 = 0
Next state                    S3 S2 = 00
But, if              S1 = 1, then
x1 = 1 ⊕ 0 ⊕ 0 = 1
and                              x2 = 1 ⊕ 0 = 1
Therefore, we have x1 x2 = 11 … if S1 = 1
Next state S3 S2 = 01
If we define the four different states as under :
00 = a, 01 = b, 10 = c and 11 = d
Then, the state diagram will be as shown in figure 10.68.
FIGURE 10.68 State diagram of the given decoder.
The code trellis in the steady-state transition will be as shown in figure 10.69.
FIGURE 10.69
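The state table behind figures 10.68 and 10.69 can be generated directly from equations (i) and (ii). A small sketch (the function name and printing format are my own):

```python
# State table for the encoder of figure 10.67: x1 = S1^S2^S3, x2 = S1^S3,
# with the state taken as (S3, S2) and named a=00, b=01, c=10, d=11 as in the text.

NAMES = {(0, 0): "a", (0, 1): "b", (1, 0): "c", (1, 1): "d"}

def step(state, s1):
    """One shift of the encoder: returns ((x1, x2), next_state)."""
    s3, s2 = state
    x1 = s1 ^ s2 ^ s3
    x2 = s1 ^ s3
    next_state = (s2, s1)          # shift: S2 -> S3, S1 -> S2
    return (x1, x2), next_state

for state in NAMES:
    for s1 in (0, 1):
        out, nxt = step(state, s1)
        print(f"{NAMES[state]} --{s1}/{out[0]}{out[1]}--> {NAMES[nxt]}")
```

For instance, from state a with input 1 it produces output 11 and next state b, matching the hand computation above.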
EXAMPLE 10.45. The encoder shown in figure 10.70 generates an all-zero sequence which is sent over a binary symmetric channel. The received sequence is 01001000…, which contains two errors (at the second and fifth positions). Show that this double error can be detected and corrected by applying the Viterbi algorithm.
FIGURE 10.70
Solution : The trellis diagram for the encoder shown in figure 10.70 is shown in figure 10.71.
FIGURE 10.71 The trellis diagram.
From this trellis diagram, we have drawn the Viterbi decoding diagram of figure 10.72. The input signal Y = 01 00 10 00 … .
FIGURE 10.72 Viterbi diagram.
From the Viterbi diagram, let us write the possible paths to each of the states a4, b4, c4 and d4, with the running path metric for each path, as shown below :

State   Possible Paths                  Running Path Metric
a4      a0 – a1 – a2 – a3 – a4         2   ✓
        a0 – a1 – b2 – c3 – a4         5   ✗
b4      a0 – a1 – a2 – a3 – b4         4   ✗
        a0 – b1 – c2 – a3 – b4         5   ✗
        a0 – b1 – d2 – c3 – b4         5   ✗
        a0 – a1 – b2 – c3 – b4         3   ✓
c4      a0 – a1 – a2 – b3 – c4         3   ✓
        a0 – b1 – c2 – b3 – c4         4   ✗
        a0 – b1 – d2 – d3 – c4         3   ✗
        a0 – a1 – b2 – d3 – c4         6   ✗
d4      a0 – a1 – a2 – b3 – d4         3   ✓
        a0 – b1 – c2 – b3 – d4         4   ✗
        a0 – b1 – d2 – d3 – d4         3   ✗
        a0 – a1 – b2 – d3 – d4         6   ✗

Out of the possible paths listed above, we select the four survivor paths having the minimum value of the running path metric at each node. The survivor paths are marked by a (✓) sign. They are as under :

S.No.   Survivor Path                        Path Metric
1       a4 →  a0 – a1 – a2 – a3 – a4         2
2       b4 →  a0 – a1 – b2 – c3 – b4         3
3       c4 →  a0 – a1 – a2 – b3 – c4         3
4       d4 →  a0 – a1 – a2 – b3 – d4         3

Out of these survivor paths, the path having the minimum running path metric (equal to 2) is a0 – a1 – a2 – a3 – a4. Hence, the encoded signal corresponding to this path is
00 00 00 00
which corresponds to the received signal 01 00 10 00 with both errors removed.
This shows that Viterbi algorithm can correct the errors present in the received signal.
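Because the message here is only four bits long, the maximum-likelihood search can even be done by brute force. As an assumption (figure 10.70 is not reproduced here), the encoder is taken to be the (2, 1, 2) code with generators g1 = (1, 1, 1), g2 = (1, 0, 1); under that assumption the all-zero message wins with metric 2, confirming the correction:

```python
# Brute-force check of example 10.45: enumerate every 4-bit message,
# encode it, and compare against the received sequence 01 00 10 00.
# Assumed encoder (not confirmed by the figure): g1=(1,1,1), g2=(1,0,1).

from itertools import product

def encode(msg):
    bits = []
    for t in range(len(msg)):
        m  = msg[t]
        m1 = msg[t - 1] if t >= 1 else 0
        m2 = msg[t - 2] if t >= 2 else 0
        bits += [m ^ m1 ^ m2, m ^ m2]    # outputs of g1 and g2
    return bits

received = [0, 1, 0, 0, 1, 0, 0, 0]

metrics = {msg: sum(a != b for a, b in zip(encode(list(msg)), received))
           for msg in product((0, 1), repeat=4)}
best = min(metrics, key=metrics.get)
print(best, metrics[best])   # (0, 0, 0, 0) 2
```

The minimum metric equals the number of channel errors, so the decoder attributes the two disagreeing positions to noise rather than to a different transmitted message.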
EXAMPLE 10.46. Figure 10.73 depicts a rate 1/2, constraint length N = 2, convolutional code encoder. Sketch the code-tree for the same.
Solution : The constraint length k = 2 and its rate is 1/2. This means that for a single input binary bit, two bits V1 and V2 are encoded at the output. S1 acts as input and S2 acts as the state. For S2, there are two possible values as under :
S2 = 0 …State a
S2 = 1 …State b
FIGURE 10.73
We assume that S1 and S2 are both zero initially. From figure 10.73, we can write that,
V1 = S1
V2 = S1 ⊕ S2
The state diagram
Before obtaining the code tree, we have to draw the state diagram as under :
(i)         Let the initial contents S1 S2 = 00
(ii)        Now, we assume that a 0 is applied at the input. Then the next state is also 0 as shown in figure 10.74. The state remains the same, i.e., a and the encoder outputs are V1= 0 and V2 = 0
FIGURE 10.74
(iii)       Now, if the present state of the encoder is a and if the input is 1, then contents S1, S2 = 10. With input 1 the next state of the encoder will be b and V1 = V2 = 1 as shown in figure 10.75.
FIGURE 10.75.
(iv)       Similarly, we can obtain the outputs and states for other possible states as shown in figure 10.76 (a) and (b)
Taking the figures 10.74, 10.75 and 10.76 (a) and (b) into consideration, we can draw the state diagram as shown in figure 10.76 (c). The code tree is as shown in figure 10.76 (d). It has been drawn by referring to the state diagram in figure 10.76 (c).
FIGURE 10.76. (Contd…)
figure 10.76
EXAMPLE 10.47. For the convolutional encoder of figure 10.77, produce a state diagram, and from it produce the trellis diagram.
FIGURE 10.77 Given convolutional encoder.
Solution : Solve yourself.
EXAMPLE 10.48. For the input sequences from 0000 to 1111 and the circuit in figure 10.78 determine the output sequence.
FIGURE 10.78.
Solution :
(i)         First, we assume that shift register contents are 000 initially.
(ii)        Then, we assume that input message is 0001.
(iii)       Next, we obtain the values of V1 and V2 for each input bit.
(iv)       Lastly, we interleave the V1 and V2 bits to obtain the output sequence.
The solution has been shown in figure 10.79 (a) to 10.79(d).
FIGURE 10.79 (a) to (e).
EXAMPLE 10.49. For the given encoder shown in figure 10.80, obtain the convolutional code for the bit sequence 1 1 0 1 1 0 1 1 and decode it by constructing the corresponding code tree.
Solution : To obtain the convolutional code for the bit sequence 1 1 0 1 1 0 1 1, follow the procedure of example 10.48.
The code tree for the given encoder has been shown in figure 10.81.
We follow the path shown by the thick arrows in the code tree of figure 10.81. The encircled numbers indicate the outputs V2 V1. Table 10.19 shows the input bits, the state of the encoder, the next state and the encoder output.
FIGURE 10.80.
Table 10.19.

S.No., input bit, encoder state (m2 m1), next state (m2 m1) and encoder output (V1 V2), tabulated for each of the input bits 1 1 0 1 1 0 1 1 by following the thick-arrow path in the code tree of figure 10.81.
FIGURE 10.81 Code tree for Example 10.49
Hence, the encoder output will be given by
Encoder output = V2V1 V2V1 V2V1 …
Substituting the values of V2 V1 from table 10.19, we can write,
Encoder output = 10 00 10 10 00 10 10 11
EXAMPLE 10.50. Determine the code efficiency and code rate for the system given in example 10.49.
Solution : The code efficiency and code rate are one and the same.
Therefore, we write
Code rate = Code efficiency = k/n
where                          k = Number of message bits in a codeword, and
n = Number of transmitted bits.
For the encoder given in example 10.49, for every input message bit, two encoded bits V1 and V2 are transmitted.
Hence, we have
Code rate = Code efficiency = k/n = 1/2                      Ans.
EXAMPLE 10.51. A convolutional encoder has a single shift register with two stages, three modulo-2 adders and an output multiplexer. The generator sequences of the encoder are as under :
g(1) = (1, 0, 1), g(2) = (1, 1, 0) and g(3) = (1, 1, 1)
Draw the block diagram of the encoder.
Solution : The generator sequences of the convolutional encoder are given.
The convolutional encoder has been shown in figure 10.82.
FIGURE 10.82. Encoder
EXAMPLE 10.52. For the convolutional encoder shown in figure 10.83, sketch the code tree.
FIGURE 10.83.
Solution : The values of c1, c2 and c3 are generated depending on the values of p1, p2 and p3 as under :
c1 = p1 ⊕ p2 ⊕ p3
c2 = p1
and                              c3 = p1 ⊕ p2
The encoder output will be obtained by interleaving the bits c1, c2 and c3 for each message input.
Therefore, Encoder output = c1 c2 c3 c1 c2 c3 ….
Let p1 represent the message input and p2 p3 represent the state. The states are defined as under :
Encoder States

p3 p2          State
0 0            a
0 1            b
1 0            c
1 1            d

Let the initial contents of the shift register be p1 p2 p3 = 0 0 0, so the initial state is a. If the first message bit is 0, then the shift register contents remain 000. Hence the encoder state is 00 and the encoder outputs are c1 c2 c3 = 0 0 0. This has been shown in figure 10.84.
FIGURE 10.84 Development of code tree.
By following this procedure, we can obtain the complete code tree as shown in figure 10.85.
FIGURE 10.85 Code tree.
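The tree development can also be sketched recursively; each level of recursion is one message bit, mirroring how figure 10.85 is built branch by branch (the traversal and label format are my own choices):

```python
# Recursive development of the code tree for the encoder of figure 10.83:
# c1 = p1^p2^p3, c2 = p1, c3 = p1^p2.

def outputs(p1, p2, p3):
    """Encoder outputs (c1, c2, c3) for input p1 with register contents p2, p3."""
    return (p1 ^ p2 ^ p3, p1, p1 ^ p2)

def code_tree(depth, p2=0, p3=0, prefix=""):
    """Print every root-to-leaf path as 'input/c1c2c3' labels."""
    if depth == 0:
        print(prefix.strip())
        return
    for p1 in (0, 1):
        c = outputs(p1, p2, p3)
        label = "".join(map(str, c))
        # shift register update: p1 -> p2, p2 -> p3
        code_tree(depth - 1, p1, p2, prefix + f"{p1}/{label} ")

code_tree(2)   # two levels of the tree
```

The first branch for message bit 0 carries the label 0/000, exactly the starting step of figure 10.84.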
EXAMPLE 10.52. For the convolutional encoder shown in figure 10.86, sketch the code tree.
FIGURE 10.86.
Solution : Solve yourself. This example is identical to example 10.42.
EXAMPLE 10.53. For the convolutional encoder shown in figure 10.87 determine the encoder output produced by the message sequence 1011110. Construct the code tree and show how it can be used for finding the encoder output.
FIGURE 10.87.
Solution : To find the encoder output, we follow the procedure given in the preceding examples.
EXAMPLE 10.54. (a) The generator polynomial of a (7, 4) cyclic code is x^3 + x + 1. Construct the generator matrix for a systematic cyclic code and find the codeword for the message (1101) using the generator matrix.
            (b)       Verify by division method.
            (c)        A convolutional code is described by
                                                g1 = [110],       g2 = [101], g3 = [111]
            (i)         Draw the encoder corresponding to this code.
            (ii)        Draw the state transition diagram for this code.
            (iii)       Draw the trellis diagram for this code.
Solution : (a) Let us first construct the generator matrix.
We know that the ith row of the generator matrix is given by
x^(n−i) ⊕ Ri(x) = Qi(x) G(x) ;       where i = 1, 2, …, k                 …(i)
It is given that the cyclic code is systematic (7, 4) code. Therefore, n = 7, k = 4 and (n – k) = 3. Substituting these values into the above expression, we obtain
x^(7−i) ⊕ Ri(x) = Qi(x) . (x^3 + x + 1) ;    where i = 1, 2, …, 4
With i = 1, the above expression becomes
x^6 ⊕ R1(x) = Q1(x) (x^3 + x + 1)                         …(ii)
Let us obtain the value of Q1(x). The quotient Q1(x) is obtained by dividing x^(n−i) = x^6 by G(x) = x^3 + x + 1.
Here, the quotient polynomial is
Q1(x) = x^3 + x + 1
and the remainder polynomial is
R1(x) = x^2 + 0x + 1
Substituting these values into equation (ii), we obtain
x^6 ⊕ R1(x) = (x^3 + x + 1) (x^3 + x + 1)
= x^6 + x^4 + x^3 + x^4 + x^2 + x + x^3 + x + 1
= x^6 + 0x^5 + (1 ⊕ 1) x^4 + (1 ⊕ 1) x^3 + x^2 + (1 ⊕ 1) x + 1
= x^6 + 0x^5 + 0x^4 + 0x^3 + x^2 + 0x + 1
Therefore, 1st Row polynomial ⇒ x^6 + 0x^5 + 0x^4 + 0x^3 + x^2 + 0x + 1
and 1st Row Elements ⇒ 1 0 0 0 1 0 1
Using the same procedure, we can obtain the polynomials for the other rows of the generator matrix as under :
2nd Row Polynomial ⇒ x^5 + x^2 + x + 1
3rd Row Polynomial ⇒ x^4 + x^2 + x
4th Row Polynomial ⇒ x^3 + x + 1
These polynomials can be transformed into the generator matrix as under (columns ordered x^6 x^5 x^4 x^3 x^2 x^1 x^0) :

G = [ 1 0 0 0 1 0 1
      0 1 0 0 1 1 1
      0 0 1 0 1 1 0
      0 0 0 1 0 1 1 ]
This is the required generator matrix.
Now, let us obtain the parity check matrix [H].
The parity check matrix [H] is given by
H = [P^T : I3×3]
where P is the parity submatrix formed by the last three columns of G. The transpose matrix P^T is given by interchanging the rows and columns of the P matrix.
This means that

P^T = [ 1 1 1 0
        0 1 1 1
        1 1 0 1 ]

Hence, the parity check matrix is given by,

H = [ 1 1 1 0 : 1 0 0
      0 1 1 1 : 0 1 0
      1 1 0 1 : 0 0 1 ]
This is the required parity check matrix [H].
Also, the code vectors in the systematic form may be obtained as under:
X = MG
where                                      M = Message matrix
G = Generator matrix
Therefore, we have     X = [1 1 0 1] G
or                                             X = [1 1 0 1 : 0 0 1]
This is the required codeword.
(b)        We know that the message polynomial M(x) is given by
M(x) = m3 x^3 + m2 x^2 + m1 x + m0
For the message (1101),                    M(x) = x^3 + x^2 + 1
Now, we multiply M(x) by x^(n−k). Here, n − k = 3.
Therefore,                    x^3 M(x) = x^3 (x^3 + x^2 + 1) = x^6 + x^5 + x^3
Now, let us perform the division.
We divide x3 M(x) by G(x).
The remainder polynomial C(x) = 1.
Now, the codeword polynomial can be given as under :
X(x) = x^(n−k) M(x) + C(x) = (x^6 + x^5 + 0x^4 + x^3 + 0x^2 + 0x + 0) + 1
or                     X(x) = x^6 + x^5 + 0x^4 + x^3 + 0x^2 + 0x + 1
Therefore, codeword X = (1 1 0 1 0 0 1).
This is same as the code vector obtained in part (a). Ans.
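Both parts (a) and (b) reduce to GF(2) polynomial division. A compact sketch using integers as bit vectors (the helper name is my own):

```python
# Division-method check for example 10.54(a)/(b): (7,4) cyclic code with
# G(x) = x^3 + x + 1 and message 1101, i.e. M(x) = x^3 + x^2 + 1.

def gf2_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division; bit i holds the x^i coefficient."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        shift = dividend.bit_length() - dlen
        dividend ^= divisor << shift     # subtract (XOR) the shifted divisor
    return dividend

g = 0b1011                      # x^3 + x + 1
m = 0b1101                      # x^3 + x^2 + 1  (message 1 1 0 1)
shifted = m << 3                # x^(n-k) * M(x) = x^6 + x^5 + x^3
c = gf2_mod(shifted, g)         # parity polynomial C(x); here it is 1
codeword = shifted | c          # X(x) = x^(n-k) M(x) + C(x)
print(format(codeword, "07b"))  # 1101001
```

The printed codeword 1101001 agrees with both the generator-matrix result of part (a) and the long division of part (b).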
(c)        We can draw the convolutional encoder as under :
Given that :                 g1  = [110]
g2  = [101]
g3 = [111]
The required encoder will be as shown in figure 10.88.
FIGURE 10.88 Convolutional encoder