Code Word, Code Rate, Code Vectors, Hamming Distance, Hamming Weight of a Code Word [w(x)]

IMPORTANT DEFINITIONS RELATED TO CODES

  1. Code Word

The code word is the n-bit encoded block of bits. As already discussed, it contains the message bits and the parity or redundant bits.

  2. Code Rate

The code rate is defined as the ratio of the number of message bits (k) to the total number of bits (n) in a code word.
⸫                                 Code rate (r) = k/n                                                                 …(10.2)
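As a quick illustration, consider a (7, 4) block code: each 7-bit code word carries k = 4 message bits, so the code rate is r = k/n = 4/7 ≈ 0.57.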

  3. Code Vectors

In the previous article, we used the terms code bits and code vectors interchangeably. Let us now define the code vectors. We can visualize an n-bit code word as a point in an n-dimensional space. The coordinates or elements of this code vector are the bits present in the code word. Consider figure 10.17, which shows the 3-bit code vectors.
Table 10.2 lists the 8 possible combinations of the 3-bit code word. We can take bit x0 along the X-axis, bit x1 along the Y-axis and bit x2 along the Z-axis. Figure 10.17 then shows the various 3-bit code vectors.
FIGURE 10.17 Code vectors for 3-bit code words.
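As an illustrative sketch in Python (the bit-ordering of the code word shown here is an assumption made only for the listing), the eight 3-bit code words can be enumerated together with their coordinates (x0, x1, x2):

from itertools import product

# Enumerate all 3-bit code words and treat each one as a point (x0, x1, x2) in 3-D space.
for x2, x1, x0 in product((0, 1), repeat=3):
    print(f"code word {x2}{x1}{x0}  ->  vector (x0, x1, x2) = ({x0}, {x1}, {x2})")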

  4. Hamming Distance

Let us consider two code vectors (or code words) having the same number of elements. The Hamming distance or simply distance between the two code words is defined as the number of locations in which their respective elements differ. For example, let us consider the two code words given below:
Code word No. 1 :    1    1    0    1    0    1    0    0
Code word No. 2 :    0    1    0    1    1    1    1    0

The two code words differ in the first, fifth and seventh bit positions. Therefore, the Hamming distance between them is 3.
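The distance can also be computed programmatically. The following Python sketch (the function name is illustrative) counts the positions at which two equal-length code words differ:

def hamming_distance(x, y):
    """Number of positions in which two equal-length code words differ."""
    assert len(x) == len(y), "code words must have the same number of elements"
    return sum(xi != yi for xi, yi in zip(x, y))

word1 = [1, 1, 0, 1, 0, 1, 0, 0]
word2 = [0, 1, 0, 1, 1, 1, 1, 0]
print(hamming_distance(word1, word2))    # prints 3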
 

 

  5. Hamming Weight of a Code Word [w(x)]

The Hamming weight of a code word x is defined as the number of non-zero elements in the code word. Equivalently, the Hamming weight of a code vector (code word) is the distance between that code word and the all-zero code vector (a code word having all elements equal to zero).

  6. Code Efficiency

The code efficiency is defined as the ratio of the number of message bits to the total number of transmitted bits per block.
Therefore, code efficiency = code rate = k/n                                    …(10.3)
EXAMPLE 10.3. Find the Hamming weight of the following code vector :
x = 1 1 0 1 0 1 0 0
Solution : As the number of non-zero elements in the above code word is 4, the Hamming weight w(x) = 4.
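A short Python sketch confirming both forms of the definition for this example, i.e. the count of non-zero elements and the distance from the all-zero vector:

x = [1, 1, 0, 1, 0, 1, 0, 0]
weight = sum(1 for bit in x if bit != 0)               # number of non-zero elements
zero = [0] * len(x)
dist_from_zero = sum(a != b for a, b in zip(x, zero))  # distance from the all-zero code vector
print(weight, dist_from_zero)                          # both print 4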
Minimum distance dmin
The minimum distance dmin of a linear block code is defined as the smallest Hamming distance between any pair of code vectors in the code.
Since the modulo-2 sum (difference) of any two code vectors of a linear code is itself a code vector, the minimum distance equals the smallest Hamming weight of the difference between any pair of distinct code vectors. Hence, the minimum distance of a linear block code is the smallest Hamming weight of the non-zero code vectors in the code.
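The following Python sketch uses a small hypothetical (4, 2) linear code, chosen purely for illustration, to confirm that the smallest pairwise Hamming distance equals the smallest non-zero Hamming weight:

from itertools import combinations

# Code words of a hypothetical (4, 2) linear block code (illustrative only).
codewords = [
    [0, 0, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [1, 1, 1, 0],
]

def distance(x, y):
    return sum(a != b for a, b in zip(x, y))

d_min = min(distance(x, y) for x, y in combinations(codewords, 2))
w_min = min(sum(c) for c in codewords if any(c))
print(d_min, w_min)    # both print 2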
Role of dmin in Error Detection and Correction
            Error detection is always possible when the number of transmission errors in a code word is less than the minimum distance dmin, because the erroneous word is then not a valid code word. However, when the number of errors equals or exceeds dmin, the erroneous word may coincide with another valid code word and the errors cannot be detected. The error detection and correction capabilities of a coding technique depend on the minimum distance dmin, as shown in Table 10.3.
Table 10.3. Error detection and correction capabilities

S.N.    Description                                                       Expression
1       Detect up to s errors per word                                    dmin ≥ s + 1
2       Correct up to t errors per word                                   dmin ≥ 2t + 1
3       Correct up to t errors and detect s > t errors per word           dmin ≥ t + s + 1

 

DO YOU KNOW?
The use of error-control coding adds complexity to the system, especially for the implementation of decoding operations in the receiver.

EXAMPLE 10.4. For a Hamming distance of 5, how many errors can be detected ? How many errors can be corrected ?
Solution : Let us consider table 10.3, taking the minimum Hamming distance as dmin = 5.
(i)         Number of errors that can be detected (s) can be obtained from
(s + 1) ≤ dmin
or                                             s ≤ 5 – 1
or                                             s ≤ 4.               Ans.
Thus, at the most 4 errors can be detected.
(ii)        Number of errors that can be corrected (t) can be obtained from
(2t + 1) ≤ dmin
or                                             2t ≤ 5 – 1
or                                             t ≤ 2.                Ans.
Thus, at the most, 2 errors can be corrected.
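A small Python sketch of the relations in Table 10.3 (the helper names are illustrative only):

def detectable_errors(d_min):
    """Largest s satisfying d_min >= s + 1."""
    return d_min - 1

def correctable_errors(d_min):
    """Largest t satisfying d_min >= 2*t + 1."""
    return (d_min - 1) // 2

print(detectable_errors(5), correctable_errors(5))    # prints 4 2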
10.11   LINEAR BLOCK CODES

  1. Now, let us discuss the linear block codes in detail and obtain various mathematical expressions related to these codes.
  2. We consider an (n, k) linear block code in which k bits are identical to the message sequence which is to be transmitted.
  3. The remaining (n – k) bits are called the generalized parity check bits or simply parity bits.

FIGURE 10.18 Structure of the code word for a linear block code.

  4. These parity bits are computed from the message bits, according to a prescribed encoding rule which decides the mathematical structure of the code.
  5. Let us consider figure 10.18, which shows the structure of a code word.
  6. A code word consists of k message bits which are denoted by m0, m1, …, mk-1 and (n – k) parity bits denoted by c0, c1, …, cn-k-1.
  7. The sequence of message bits is applied to a linear block encoder to produce an n-bit code word. The elements of this code word are x0, x1, …, xn-1.
  8. As shown in figure 10.18, the first k bits of the code word are identical to the corresponding message bits, and the remaining (n – k) bits are the parity bits. We can express this mathematically as under:

xi = mi                              for i = 0, 1, …, k – 1
xi = ci-k                            for i = k, k + 1, …, n – 1                          …(10.4)

  9. The (n – k) parity bits are linear sums of the k message bits, as will be discussed later on.
  10. The code vector represented by equation (10.4) can be mathematically represented as :

X = [M : C]                                                            …(10.5)
where                                      M = k-bit message vector
and                                          C = (n – k)-bit parity vector

  11. A block code generator generates the parity vector (or parity bits) required to be added to the message bits to generate the code words. The code vector X can also be represented as under :

X = MG                                                                … (10.6)
where                          X = Code vector of size 1 x n
M = Message vector of size 1 x k
and                                          G = Generator matrix of size k x n.
Equation (10.6) can be represented in the matrix form as under :
[X]1 x n = [M]1 x k [G]k x n                                         …(10.7)
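A minimal Python sketch of equation (10.6), assuming a hypothetical (6, 3) generator matrix chosen only for illustration; all additions are performed modulo 2:

# Hypothetical (6, 3) systematic generator matrix, for illustration only.
G = [
    [1, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

def encode(m, G):
    """Code vector X = MG, with all additions performed modulo 2."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(m))) % 2 for j in range(n)]

m = [1, 0, 1]
print(encode(m, G))    # prints [1, 0, 1, 0, 1, 1]; the first 3 bits are the message bits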

  12. The generator matrix is dependent on the type of linear block code used. The generator matrix is generally represented as under:

[G] = [Ik | P]                                                           …(10.8)
where                                      Ik = k x k identity matrix
and                                          P = k x (n – k) coefficient matrix.
Therefore,                                Ik = | 1        0        …   0        |
                                               | 0        1        …   0        |
                                               | …                              |
                                               | 0        0        …   1        |
and                                          P = | P00      P01      …   P0,n-k-1 |
                                               | P10      P11      …   P1,n-k-1 |
                                               | …                              |
                                               | Pk-1,0   Pk-1,1   …   Pk-1,n-k-1 |                …(10.9)
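Continuing the same hypothetical (6, 3) example, the generator matrix of equation (10.8) can be assembled by appending each row of P to the corresponding row of the identity matrix Ik:

k = 3
P = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
]
I_k = [[1 if i == j else 0 for j in range(k)] for i in range(k)]   # k x k identity matrix
G = [I_k[i] + P[i] for i in range(k)]                              # G = [Ik | P], size k x n
for row in G:
    print(row)
# [1, 0, 0, 1, 1, 0]
# [0, 1, 0, 0, 1, 1]
# [0, 0, 1, 1, 0, 1]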

  13. The parity vector can be obtained as under:

C = MP                                                                       …(10.10)
Substituting the matrix form, we obtain
[c0  c1  …  cn-k-1]1 x (n-k) = [m0  m1  …  mk-1]1 x k [P]k x (n-k)                          …(10.11)
By solving the above matrix equation, we get the parity vectors as under:
c0 = m0 P00 ⊕ m1 P10 ⊕ m2 P20 ⊕ … ⊕ mk-1 Pk-1,0
c1 = m0 P01 ⊕ m1 P11 ⊕ m2 P21 ⊕ … ⊕ mk-1 Pk-1,1                 …(10.13)
c2 = m0 P02 ⊕ m1 P12 ⊕ m2 P22 ⊕ … ⊕ mk-1 Pk-1,2
Similarly, we can obtain the expressions for the remaining parity bits.
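Using the same hypothetical P matrix as above, a short Python sketch of equation (10.10) which evaluates each parity bit as a modulo-2 sum of the message bits:

# Hypothetical 3 x 3 coefficient matrix P (k = 3, n - k = 3), for illustration only.
P = [
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 1],
]

def parity_bits(m, P):
    """Parity vector C = MP: cj = m0*P0j XOR m1*P1j XOR ... (modulo-2 sums)."""
    return [sum(m[i] * P[i][j] for i in range(len(m))) % 2 for j in range(len(P[0]))]

m = [1, 0, 1]
C = parity_bits(m, P)
print(m + C)           # prints [1, 0, 1, 0, 1, 1]

Note that the first k bits of the printed code word are the message bits themselves, which confirms the systematic structure X = [M : C] obtained earlier from X = MG.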
