# Convolutional Codes: Block Codes and Convolutional Codes, Coding Gain of Convolutional Codes

**Few Definitions Related to Convolutional Codes**

**Code rate (r)**

The code rate of the encoder of figure 10.48 is expressed as

r = k/n …(10.52)

Here, *k* = Number of message bits taken at a time = 1 and *n* = Number of encoded bits per message bit = 2.

Therefore, r = 1/2 …(10.53)

**Constraint length (K)**

Each message bit influences a span of *n* (L + 1) successive output bits. The quantity K = *n* (L + 1) is called the constraint length. It is measured in terms of encoded output bits. For the encoder of figure 10.48, the constraint length is 6 bits, since n = 2 and L = 2. Here, L is the encoder's memory, measured in terms of input message bits.

**Code Dimension**

The code dimension of a convolutional code depends on *n, k* and *L*. Here, *k* represents the number of message bits taken at a time by the encoder, *n* is the number of encoded bits per message bit and *L* is the encoder's memory. The code dimension is therefore represented by (*n, k, L*).

For the encoder of figure 10.48, the code dimension is given by (2, 1, 2).

**10.16.4. Time Domain Approach**

The time-domain behaviour of a binary convolutional encoder may be defined in terms of *n* impulse responses. Let the impulse response of the adder generating *x*_{1} in figure 10.48 be given by the sequence {g_{0}^{(1)}, g_{1}^{(1)}, g_{2}^{(1)}, …}. Similarly, let the sequence {g_{0}^{(2)}, g_{1}^{(2)}, g_{2}^{(2)}, …} denote the impulse response of the adder generating *x*_{2} in figure 10.48. These impulse responses are also called the **generator sequences** of the code.

Let (m_{0}, m_{1}, m_{2}, …) denote the message sequence entering the encoder of figure 10.48, one bit at a time (starting from m_{0}). The encoder generates two output sequences by performing convolutions of the message sequence with the impulse responses.

The bit sequence x_{1} is given by,

x_{1}(i) = g_{0}^{(1)} m_{i} ⊕ g_{1}^{(1)} m_{i−1} ⊕ … ⊕ g_{L}^{(1)} m_{i−L} …(10.54)

Similarly, the other bit sequence *x*_{2} is given by,

x_{2}(i) = g_{0}^{(2)} m_{i} ⊕ g_{1}^{(2)} m_{i−1} ⊕ … ⊕ g_{L}^{(2)} m_{i−L} …(10.55)

Then, these bit sequences are multiplexed with the help of the commutator switch to produce the following output :

X = {x_{1}(0) x_{2}(0), x_{1}(1) x_{2}(1), x_{1}(2) x_{2}(2), …} …(10.56)

where x_{1} = x_{1}(0), x_{1}(1), x_{1}(2), …

and x_{2} = x_{2}(0), x_{2}(1), x_{2}(2), …

**EXAMPLE 10.40. The convolutional encoder of figure 10.49 has the following two generator sequences, each of length 3 :**

**(g_{0}^{(1)}, g_{1}^{(1)}, g_{2}^{(1)}) = (1, 1, 1) and (g_{0}^{(2)}, g_{1}^{(2)}, g_{2}^{(2)}) = (1, 0, 1)**

**Determine the encoded sequence for the following input message : (m_{0}, m_{1}, m_{2}, m_{3}, m_{4}) = (1 0 0 1 1)**

**diagram**

**FIGURE 10.49**

*Convolutional encoder*

**Solution :** The given encoder can be drawn in the standard form as shown in figure 10.50.

**diagram**

**FIGURE 10.50**

*Given encoder redrawn.*

The output sequence may be obtained by using equations (10.54) and (10.55) stated above to obtain the bit streams *x*_{1} and *x*_{2}.

**10.16.5. Transform-Domain Approach**

**We know that convolution in the time domain is transformed into multiplication of transforms in the transform domain. We can use this principle in the transform-domain approach.**

In this process, the first step is to replace each path in the encoder by a polynomial whose coefficients are the respective elements of the impulse response. For example, for the top-adder path in the previous example, it is given that,

g_{0}^{(1)} = 1, g_{1}^{(1)} = 1 and g_{2}^{(1)} = 1

Therefore, the input-top adder-output path of the encoder of figure 10.50 can be expressed in terms of the polynomial as under :

G^{(1)}(p) = g_{0}^{(1)} + g_{1}^{(1)} p + g_{2}^{(1)} p^{2} …(10.57)

Substituting the values, we get,

G^{(1)}(p) = 1 + p + p^{2} …(10.58)

The general expression is given by,

G^{(1)}(p) = g_{0}^{(1)} + g_{1}^{(1)} p + g_{2}^{(1)} p^{2} + … + g_{L}^{(1)} p^{L} …(10.59)

The variable p denotes a unit-delay operation, and the power of p defines the number of time units by which the associated bit in the impulse response has been delayed with respect to the first bit, i.e., g_{0}^{(1)}.

Similarly, the polynomial corresponding to the input-bottom adder-output path for the encoder shown in figure 10.50 is given by,

G^{(2)}(p) = g_{0}^{(2)} + g_{1}^{(2)} p + g_{2}^{(2)} p^{2} …(10.60)

Substituting g_{0}^{(2)} = 1, g_{1}^{(2)} = 0 and g_{2}^{(2)} = 1, we obtain,

G^{(2)}(p) = 1 + p^{2} …(10.61)

The general form of the polynomial is given by,

G^{(2)}(p) = g_{0}^{(2)} + g_{1}^{(2)} p + g_{2}^{(2)} p^{2} + … + g_{L}^{(2)} p^{L} …(10.62)

The polynomials G^{(1)}(p) and G^{(2)}(p) of equations (10.58) and (10.61) are called the **generator polynomials** of the code.

From the generator polynomials, we can obtain the codeword polynomials as under :

The codeword polynomial corresponding to the top adder is given by,

x^{(1)}(p) = G^{(1)}(p) . m(p) …(10.63)

where m(p) = Message polynomial

and the codeword polynomial corresponding to the bottom adder is given by,

x^{(2)}(p) = G^{(2)}(p) . m(p) …(10.64)

Once we get the two codeword polynomials, it is possible to obtain the corresponding output sequence by simply reading off the individual coefficients. This has been illustrated in the following example.

**EXAMPLE 10.41. Determine the codeword for the convolutional encoder of figure 10.49 for the message signal (10011), using the transform-domain approach. The impulse response of the input-top adder-output path is (1, 1, 1) and that of the input-bottom adder-output path is (1, 0, 1).**

**Solution :** First, let us write the generator polynomial G^{(1)}(p).

The impulse response of the input-top adder-output path of the convolutional encoder of figure 10.49 is (1, 1, 1). Therefore, we have

g_{0}^{(1)} = 1, g_{1}^{(1)} = 1 and g_{2}^{(1)} = 1

Therefore, the generator polynomial G^{(1)}(p) is given by,

G^{(1)}(p) = g_{0}^{(1)} + g_{1}^{(1)} p + g_{2}^{(1)} p^{2}

or G^{(1)}(p) = 1 + p + p^{2} …(i)

The given message is (m_{0} m_{1} m_{2} m_{3} m_{4}) = (1 0 0 1 1). Therefore, the message polynomial is given by,

M(p) = m_{0} + m_{1} p + m_{2} p^{2} + m_{3} p^{3} + m_{4} p^{4}

or M(p) = 1 + p^{3} + p^{4} …(ii)

Now, we find the codeword polynomial for the top adder.

x^{(1)}(p) = G^{(1)}(p) . M(p)

x^{(1)}(p) = (1 + p + p^{2}) (1 + p^{3} + p^{4})

= 1 + p^{3} + p^{4} + p + p^{4} + p^{5} + p^{2} + p^{5} + p^{6}

or x^{(1)}(p) = 1 + p + p^{2} + p^{3} + (1 + 1) p^{4} + (1 + 1) p^{5} + p^{6}

or x^{(1)}(p) = 1 + p + p^{2} + p^{3} + p^{6} …. (since addition is mod-2) …(iii)

**The generator polynomial G^{(2)}(p)**

The impulse response of the input-bottom adder-output path of the convolutional encoder of figure 10.49 is (1, 0, 1). Therefore,

g_{0}^{(2)} = 1, g_{1}^{(2)} = 0 and g_{2}^{(2)} = 1

Therefore, the generator polynomial G^{(2)}(p) is given by,

G^{(2)}(p) = g_{0}^{(2)} + g_{1}^{(2)} p + g_{2}^{(2)} p^{2}

or G^{(2)}(p) = 1 + p^{2} …(iv)

**Codeword polynomial for the bottom adder**

x^{(2)}(p) = G^{(2)}(p) . M(p)

Substituting equations (ii) and (iv), we get,

x^{(2)}(p) = (1 + p^{2}) (1 + p^{3} + p^{4})

or x^{(2)}(p) = 1 + p^{2} + p^{3} + p^{4} + p^{5} + p^{6} …(v)

**To obtain the code sequences**

The output sequence at the top adder can be obtained from the corresponding codeword polynomial x^{(1)}(p),

x^{(1)}(p) = 1 + p + p^{2} + p^{3} + p^{6} = 1 + p + p^{2} + p^{3} + 0p^{4} + 0p^{5} + p^{6}

Therefore, the corresponding code sequence is (1 1 1 1 0 0 1).

Similarly, the code sequence for the bottom adder can be obtained from its codeword polynomial,

x^{(2)}(p) = 1 + p^{2} + p^{3} + p^{4} + p^{5} + p^{6}

or x^{(2)}(p) = 1 + 0p + p^{2} + p^{3} + p^{4} + p^{5} + p^{6}

Thus, the corresponding code sequence is (1 0 1 1 1 1 1).

Hence, the final code word at the output of the encoder is obtained by multiplexing (interleaving) the two code sequences.

Codeword = 11 10 11 11 01 01 11
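The multiplexing (commutator) step can be sketched in one line of Python, taking the two code sequences just derived:

```python
# Multiplexing the two code sequences: the encoder output alternates
# one bit from x1 with one bit from x2.
x1 = [1, 1, 1, 1, 0, 0, 1]
x2 = [1, 0, 1, 1, 1, 1, 1]
codeword = [bit for pair in zip(x1, x2) for bit in pair]
print(codeword)  # → [1,1, 1,0, 1,1, 1,1, 0,1, 0,1, 1,1]
```

Grouped in pairs, this is 11 10 11 11 01 01 11, the codeword stated above.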

**Advantage of Transform Domain Approach**

Note that the codeword obtained using the time-domain approach and the one obtained using the transform-domain approach are the same. However, the computation using the transform-domain approach demands less effort than the time-domain approach.

**10.16.6. Graphical Representation for Convolutional Encoding**

For convolutional encoding, there are three different graphical representations that are widely used. They are related to each other. The graphical representations are as under :

(i) The code tree

(ii) The code trellis

(iii) The state diagram.

Let us discuss them one by one.

**10.16.7. The Code Tree**

In this subsection, let us draw the code tree for the (2, 1, 2) convolutional encoder of figure 10.48. We assume that the register has been cleared, so that it contains all zeros when the first message bit m_{0} arrives. Therefore, the initial state is m_{-2} m_{-1} = 00. Now, if m_{0} = 0, the encoder outputs are as under :

x_{1} = m_{0} ⊕ m_{-1} ⊕ m_{-2} ….. (according to equation 10.50)

If m_{0} = 0, we have

x_{1} = 0 ⊕ 0 ⊕ 0 = 0

Also, x_{2} = m_{0} ⊕ m_{-2} = 0 ⊕ 0 = 0

Therefore, x_{1} x_{2} = 00 …… if m_{0} = 0

But, if m_{0} = 1, then we have

x_{1} = 1 ⊕ 0 ⊕ 0 = 1

and x_{2} = 1 ⊕ 0 = 1

Therefore, x_{1} x_{2} = 11 … if m_{0} = 1

The code tree has been drawn in figure 10.51. It begins at a branch point on node a, which represents the initial state. So, if m_{0} = 0, we should take the upper branch from node a to obtain the output 00 and the next state m_{-1} m_{0} = 00.

**diagram**

**figure 10.51**

*Code tree for (2, 1, 2) encoder.*

Therefore, even this state is labeled *a*. But, if m_{0} = 1, then we should take the lower branch from *a* to obtain the output 11 and the next state m_{-1} m_{0} = 01. This state has been labelled *b*. The code tree progresses in this manner for each new message bit.

The nodes are labeled with the letters *a, b, c* or *d*, denoting the current state given by the two previous message bits, and we go up or down from a node depending on the value of m_{0}. Each branch shows the resulting encoded output x_{1} x_{2}, calculated using equation (10.50). Each branch terminates at another node, which is labelled with the next state. There are 2^{j} possible branches for the *j*th message bit, but the branch pattern begins to repeat at *j* = 3, since the register length is L + 1 = 3.

The procedure of drawing the code tree has been illustrated in figure 10.52.

**diagram**

**diagram**

**diagram**

**FIGURE 10.52**

*Operation of the (2, 1, 2) encoder and development of code tree.*

**10.16.8. Code Trellis**

Figure 10.53 shows a more compact graphical representation which is popularly known as code trellis. Here, the nodes on the left denote the four possible current states and the nodes on the right are the resulting next state.

**diagram**

**FIGURE 10.53**

*Code trellis for the (2, 1, 2) convolutional encoder.*

A solid line represents the state transition or branch for m_{0} = 0 and the dotted line represents the branch for m_{0} = 1. Each branch is labelled with the resulting output bits x_{1} x_{2}.

**10.16.9. State Diagram**

Figure 10.54 shows a state diagram for the encoder of figure 10.50. We can obtain this state diagram from the code trellis by coalescing the left and right sides of the trellis. The self-loops at the nodes *a* and *d* represent the state transitions *a-a* and *d-d*.

**diagram**

**FIGURE 10.54**

*State diagram for the (2, 1, 2) convolutional encoder of figure 10.50.*

A solid line represents the state transition for m_{0} = 0 and the dotted line represents the state transition for m_{0} = 1. Each branch is labelled with the resulting output bits x_{1} x_{2}. If the sequence of message bits and the initial state are given to us, then we can use either the code trellis or the state diagram to find the resulting state sequence and output bits. This procedure has been illustrated in figure 10.55, starting at the initial state *a*.

STATE : a b d c b d d c

OUTPUT : 11 01 01 00 01 10 01 11

**FIGURE 10.55**

*Procedure to draw the trellis or state diagram.*
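The state-sequence trace of figure 10.55 can be reproduced by simulating the encoder as a finite-state machine. Note the message bits are not listed in the text; the input sequence (1 1 0 1 1 1 0 0) used below was inferred from the state and output rows, so it is an assumption of this sketch.

```python
# Tracing the (2, 1, 2) encoder through its state diagram.  States are
# labelled a = 00, b = 10, c = 01, d = 11 for the register contents
# (m[i-1], m[i-2]); this labelling is chosen to reproduce figure 10.55.

def trace(bits):
    """Return (state label before each bit, output pair per bit)."""
    label = {(0, 0): 'a', (1, 0): 'b', (0, 1): 'c', (1, 1): 'd'}
    s1 = s2 = 0                      # register: previous bit, bit before
    states, outputs = [], []
    for m in bits:
        states.append(label[(s1, s2)])
        x1 = m ^ s1 ^ s2             # x1 = m0 XOR m-1 XOR m-2
        x2 = m ^ s2                  # x2 = m0 XOR m-2
        outputs.append(f"{x1}{x2}")
        s1, s2 = m, s1               # shift the register
    return states, outputs

states, outputs = trace([1, 1, 0, 1, 1, 1, 0, 0])  # inferred message
print(states)   # → ['a', 'b', 'd', 'c', 'b', 'd', 'd', 'c']
print(outputs)  # → ['11', '01', '01', '00', '01', '10', '01', '11']
```

Both printed rows match the STATE and OUTPUT rows of figure 10.55.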
