
**9.8 THE DISCRETE MEMORYLESS CHANNEL (DMC)**

In this section, let us discuss different aspects related to discrete memoryless channels (DMC).

**9.8.1. Channel Representation**

A communication channel may be defined as the path or medium through which symbols flow to the receiver. A discrete memoryless channel (DMC) is a statistical model with an input *X* and an output *Y*, as shown in figure 9.1. During each unit of time (signaling interval), the channel accepts an input symbol from *X*, and in response it generates an output symbol from *Y*. The channel is said to be “discrete” when the alphabets of *X* and *Y* are both finite. It is said to be “memoryless” when the current output depends only on the current input and not on any of the previous inputs.
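The memoryless property can be made concrete with a short simulation. The sketch below assumes a hypothetical 2-input, 2-output channel (the transition values are illustrative, not taken from figure 9.1); each call to `transmit` depends only on the current input symbol, never on earlier ones.

```python
import random

# Hypothetical channel matrix: row i holds the distribution of the output Y
# given input x_i, i.e., the channel transition probabilities P(y_j|x_i).
P_Y_GIVEN_X = [
    [0.9, 0.1],   # P(y1|x1), P(y2|x1)
    [0.2, 0.8],   # P(y1|x2), P(y2|x2)
]

def transmit(i):
    """Pass input symbol x_i (0-indexed) through the channel once.

    The channel is memoryless: the draw below uses only the current
    input i, never the history of earlier transmissions.
    """
    r = random.random()
    cumulative = 0.0
    for j, p in enumerate(P_Y_GIVEN_X[i]):
        cumulative += p
        if r < cumulative:
            return j
    return len(P_Y_GIVEN_X[i]) - 1  # guard against rounding at the tail

random.seed(0)
outputs = [transmit(0) for _ in range(10_000)]
print(outputs.count(0) / 10_000)   # close to P(y1|x1) = 0.9
```

Over many trials the empirical frequency of each output symbol approaches the corresponding channel transition probability.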

A diagram of a DMC with *m* inputs and *n* outputs has been illustrated in figure 9.1. The input *X* consists of input symbols x_{1}, x_{2}, … x_{m}. The a priori probabilities of these source symbols P(x_{i}) are assumed to be known. The output *Y* consists of output symbols y_{1}, y_{2}, … y_{n}. Each possible input-to-output path is indicated along with a conditional probability P(y_{j}|x_{i}), where P(y_{j}|x_{i}) is the conditional probability of obtaining output y_{j} given that the input is x_{i}, and is called a **channel transition probability**.

**diagram**

**FIGURE 9.1** *Representation of a discrete memoryless channel (DMC).*

**9.8.2. The Channel Matrix**

A channel is completely specified by the complete set of transition probabilities. Accordingly, the channel in figure 9.1 is often specified by the matrix of transition probabilities [P(Y|X)]. This matrix is given by

           ⎡ P(y_{1}|x_{1})   P(y_{2}|x_{1})   …   P(y_{n}|x_{1}) ⎤
[P(Y|X)] = ⎢ P(y_{1}|x_{2})   P(y_{2}|x_{2})   …   P(y_{n}|x_{2}) ⎥   …(9.13)
           ⎢       ⋮                 ⋮                    ⋮        ⎥
           ⎣ P(y_{1}|x_{m})   P(y_{2}|x_{m})   …   P(y_{n}|x_{m}) ⎦

This matrix [P(Y|X)] is called the **channel matrix**.

Since each input to the channel results in some output, each row of the channel matrix must sum to unity. This means that

Σ_{j=1}^{n} P(y_{j}|x_{i}) = 1   for all i …(9.14)
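Equation (9.14) gives a quick sanity check on any proposed channel matrix. A minimal sketch, using a hypothetical 2 × 3 matrix:

```python
# Each row of a channel matrix is a conditional distribution P(Y|x_i),
# so it must sum to unity (eq. 9.14). Hypothetical 2x3 example:
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.1, 0.8],
]

for i, row in enumerate(P):
    assert abs(sum(row) - 1.0) < 1e-12, f"row {i} does not sum to unity"
print("valid channel matrix")
```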

Now, if the input probabilities P(X) are represented by the row matrix, then we have

[P(X)] = [P(x_{1}) P(x_{2})… P(x_{m})] …(9.15)

Also, the output probabilities P(Y) are represented by the row matrix as under:

[P(Y)] = [P(y_{1}) P(y_{2}) … P(y_{n})] …(9.16)

then [P(Y)] = [P(X)] [P(Y|X)] …(9.17)

Now, if P(X) is represented as a diagonal matrix, then we have

             ⎡ P(x_{1})     0       …      0     ⎤
[P(X)]_{d} = ⎢    0      P(x_{2})   …      0     ⎥   …(9.18)
             ⎣    0         0       …   P(x_{m}) ⎦

then [P(X, Y)] = [P(X)]_{d} [P(Y|X)] …(9.19)

where the (i, j) element of matrix [P(X, Y)] has the form P(x_{i}, y_{j}).

The matrix [P(X, Y)] is known as the joint probability matrix, and the element P(x_{i}, y_{j}) is the joint probability of transmitting x_{i} and receiving y_{j}.
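Equations (9.17) and (9.19) are ordinary matrix products and can be sketched in a few lines. The helper names and the 2 × 2 numbers below are illustrative assumptions, not values from the text's figures:

```python
def output_probs(p_x, p_y_given_x):
    """[P(Y)] = [P(X)] [P(Y|X)]  (eq. 9.17): row vector times channel matrix."""
    n = len(p_y_given_x[0])
    return [sum(p_x[i] * p_y_given_x[i][j] for i in range(len(p_x)))
            for j in range(n)]

def joint_probs(p_x, p_y_given_x):
    """[P(X, Y)] = [P(X)]_d [P(Y|X)]  (eq. 9.19): scale row i by P(x_i)."""
    return [[p_x[i] * p for p in row] for i, row in enumerate(p_y_given_x)]

p_x = [0.5, 0.5]                      # illustrative a priori probabilities
channel = [[0.9, 0.1], [0.2, 0.8]]    # illustrative channel matrix
p_y = output_probs(p_x, channel)
p_xy = joint_probs(p_x, channel)
print([round(v, 2) for v in p_y])     # [0.55, 0.45]
print(p_xy)                           # [[0.45, 0.05], [0.1, 0.4]]
```

Note that pre-multiplying by the diagonal matrix [P(X)]_{d} is simply a row-by-row scaling of the channel matrix.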

**9.9 TYPES OF CHANNELS**

Other than continuous and discrete channels, there are some special types of channels with their own channel matrices. In this subsection, let us discuss these special channels.

**9.9.1. Lossless Channel**

A channel described by a channel matrix with only one non-zero element in each column is called a lossless channel. An example of a lossless channel has been shown in figure 9.2, and the corresponding channel matrix is given in equation (9.20) as under:

**equation**

**diagram**

**FIGURE 9.2** *Lossless Channel.*

It can be shown that in the lossless channel, no source information is lost in transmission.
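The defining column condition is easy to test mechanically. A minimal sketch, with an assumed lossless matrix (the values are illustrative, not those of figure 9.2):

```python
def is_lossless(matrix, tol=1e-12):
    """A channel is lossless when every COLUMN of its channel matrix has
    exactly one non-zero entry: each received symbol then pins down the
    transmitted symbol, so no source information is lost."""
    return all(sum(1 for p in column if p > tol) == 1
               for column in zip(*matrix))

# Illustrative lossless matrix: 2 inputs, 4 outputs.
P_lossless = [
    [0.75, 0.25, 0.0, 0.0],
    [0.0,  0.0,  0.5, 0.5],
]
print(is_lossless(P_lossless))            # True
print(is_lossless([[0.5, 0.5],
                   [0.5, 0.5]]))          # False
```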

**9.9.2. Deterministic Channel**

A channel described by a channel matrix with only one non-zero element in each row is called a deterministic channel. An example of a deterministic channel has been shown in figure 9.3, and the corresponding channel matrix is given by equation (9.21) as under:

**equation** …(9.21)

**diagram**

**FIGURE 9.3** *Deterministic channel.*

**NOTE:** It may be noted that since each row has only one non-zero element, this element must be unity by equation (9.14). Thus, when a given source symbol is sent over a deterministic channel, it is clear which output symbol will be received.
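The row condition in the note above can be sketched the same way (the matrix is an illustrative assumption, not figure 9.3's):

```python
def is_deterministic(matrix, tol=1e-12):
    """A channel is deterministic when every ROW has exactly one non-zero
    entry; by the row-sum property that entry must be unity, so each
    input fixes its output symbol."""
    return all(sum(1 for p in row if p > tol) == 1 for row in matrix)

# Illustrative deterministic matrix: 3 inputs, 2 outputs.
P_det = [
    [1.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
]
print(is_deterministic(P_det))                     # True
print(is_deterministic([[0.9, 0.1], [0.2, 0.8]]))  # False
```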

**9.9.3. Noiseless Channel**

A channel is called noiseless if it is both lossless and deterministic. A noiseless channel has been shown in figure 9.4. The channel matrix has only one element in each row and in each column, and this element is unity. Note that the input and output alphabets are of the same size, that is, m = n for the noiseless channel.

**diagram**

**FIGURE 9.4** *Noiseless channel.*

With the inputs and outputs ordered as in figure 9.4, the matrix for a noiseless channel is the m × m identity matrix:

           ⎡ 1  0  …  0 ⎤
[P(Y|X)] = ⎢ 0  1  …  0 ⎥
           ⎣ 0  0  …  1 ⎦

**9.9.4. Binary Symmetric Channel (BSC)**

The binary symmetric channel (BSC) is defined by the channel diagram shown in figure 9.5, and its channel matrix is given by

[P(Y|X)] = ⎡ 1 - p     p   ⎤   …(9.22)
           ⎣   p     1 - p ⎦

A BSC has two inputs (x_{1} = 0, x_{2} = 1) and two outputs (y_{1} = 0, y_{2} = 1). This channel is symmetric because the probability of receiving a 1 if a 0 is sent is the same as the probability of receiving a 0 if a 1 is sent. This common transition probability is denoted by *p*, as shown in figure 9.5.
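The BSC matrix of equation (9.22) can be written as a one-parameter function; `bsc_matrix` is a hypothetical helper name used only for this sketch:

```python
def bsc_matrix(p):
    """Channel matrix of a binary symmetric channel (eq. 9.22): a bit is
    flipped with probability p and delivered correctly with probability 1 - p."""
    return [[1 - p, p],
            [p, 1 - p]]

M = bsc_matrix(0.1)
print(M)  # [[0.9, 0.1], [0.1, 0.9]]
```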

**diagram**

**FIGURE 9.5** *Binary symmetric channel.*

**EXAMPLE 9.19. Given the binary channel shown in figure 9.6.**

**(i) Find the channel matrix of the channel.**

**(ii) Find P(y_{1}) and P(y_{2}) when P(x_{1}) = P(x_{2}) = 0.5.**

**(iii) Find the joint probabilities P(x_{1}, y_{2}) and P(x_{2}, y_{1}) when P(x_{1}) = P(x_{2}) = 0.5. (PTU-2001)**

**diagram**

**FIGURE 9.6**

**Solution:** (i) We know that the channel matrix is given by

[P(Y|X)] = ⎡ P(y_{1}|x_{1})   P(y_{2}|x_{1}) ⎤
           ⎣ P(y_{1}|x_{2})   P(y_{2}|x_{2}) ⎦

Substituting all the given values, we get

[P(Y|X)] = ⎡ 0.9   0.1 ⎤
           ⎣ 0.2   0.8 ⎦

(ii) We know that

[P(Y)] = [P(X)] [P(Y|X)]

Substituting all the values, we get

or [P(Y)] = [0.5  0.5] ⎡ 0.9   0.1 ⎤
                       ⎣ 0.2   0.8 ⎦

or [P(Y)] = [0.55  0.45] = [P(y_{1})  P(y_{2})]

Hence, P(y_{1}) = 0.55 and P(y_{2}) = 0.45. **Ans.**

**DO YOU KNOW?**

The capacity of a noisy (discrete, memoryless) channel is defined as the maximum possible rate of information transmission over the channel.

(iii) Again, we have

[P(X, Y)] = [P(X)]_{d} [P(Y|X)]

Substituting all the values, we get

[P(X, Y)] = ⎡ 0.5    0  ⎤ ⎡ 0.9   0.1 ⎤
            ⎣  0    0.5 ⎦ ⎣ 0.2   0.8 ⎦

or [P(X, Y)] = ⎡ 0.45   0.05 ⎤   …(i)
               ⎣ 0.1    0.4  ⎦

Now, let us write the standard matrix form as under:

[P(X, Y)] = ⎡ P(x_{1}, y_{1})   P(x_{1}, y_{2}) ⎤   …(ii)
            ⎣ P(x_{2}, y_{1})   P(x_{2}, y_{2}) ⎦

Comparing (i) and (ii), we get

P(x_{1}, y_{2}) = 0.05 and P(x_{2}, y_{1}) = 0.1. **Ans.**
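The arithmetic of Example 9.19 can be checked numerically. The transition values below (P(y_{2}|x_{1}) = 0.1 and P(y_{1}|x_{2}) = 0.2) are inferred from the worked results, since figure 9.6 itself is not reproduced here:

```python
# Channel matrix consistent with the results of Example 9.19 (inferred,
# because figure 9.6 is not reproduced in the text):
P = [[0.9, 0.1],
     [0.2, 0.8]]
p_x = [0.5, 0.5]

# (ii) [P(Y)] = [P(X)] [P(Y|X)]
p_y = [sum(p_x[i] * P[i][j] for i in range(2)) for j in range(2)]

# (iii) [P(X, Y)] = [P(X)]_d [P(Y|X)]: scale row i by P(x_i)
p_xy = [[p_x[i] * P[i][j] for j in range(2)] for i in range(2)]

print([round(v, 2) for v in p_y])      # [0.55, 0.45]
print(p_xy[0][1], p_xy[1][0])          # 0.05 0.1
```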

**EXAMPLE 9.20. Two binary channels of Example 9.19 are connected in cascade as shown in figure 9.7.**

**diagram**

**FIGURE 9.7** *Two binary channels in cascade.*

**(i) Find the overall channel matrix of the resultant channel and draw the equivalent channel diagram.**

** (ii) Find P(z _{1}) and P(z_{2}) when P(x_{1}) = P(x_{2}) = 0.5 **

**Solution:** (i) We know that

[P(Y)] = [P(X)] [P(Y|X)] and [P(Z)] = [P(Y)] [P(Z|Y)]

so that

[P(Z)] = [P(X)] [P(Y|X)] [P(Z|Y)] = [P(X)] [P(Z|X)]

Thus, from figure 9.7, we have

[P(Z|X)] = [P(Y|X)] [P(Z|Y)]

Substituting all the values, we get

[P(Z|X)] = ⎡ 0.9   0.1 ⎤ ⎡ 0.9   0.1 ⎤ = ⎡ 0.83   0.17 ⎤
           ⎣ 0.2   0.8 ⎦ ⎣ 0.2   0.8 ⎦   ⎣ 0.34   0.66 ⎦

The resultant equivalent channel diagram has been shown in figure 9.8.

(ii) Again, we have [P(Z)] = [P(X)] [P(Z|X)]. Substituting all the values, we get

[P(Z)] = [0.5  0.5] ⎡ 0.83   0.17 ⎤ = [0.585  0.415]   …(i)
                    ⎣ 0.34   0.66 ⎦

Let us write the standard expression as under:

[P(Z)] = [P(z_{1})  P(z_{2})]   …(ii)

Comparing equation (i) and equation (ii), we obtain

P(z_{1}) = 0.585 and P(z_{2}) = 0.415. **Ans.**

**diagram**

**FIGURE 9.8** *The resultant equivalent channel diagram.*
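The cascade computation of Example 9.20 reduces to a single matrix product; a sketch, where the stage matrix is the channel inferred from Example 9.19:

```python
def matmul(a, b):
    """Plain matrix product; for a cascade, [P(Z|X)] = [P(Y|X)] [P(Z|Y)]."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

stage = [[0.9, 0.1],   # inferred channel matrix of Example 9.19
         [0.2, 0.8]]
P_zx = matmul(stage, stage)                       # two identical stages
p_z = [sum(0.5 * P_zx[i][j] for i in range(2)) for j in range(2)]

print([[round(v, 2) for v in row] for row in P_zx])  # [[0.83, 0.17], [0.34, 0.66]]
print([round(v, 3) for v in p_z])                    # [0.585, 0.415]
```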

**EXAMPLE 9.21. A channel has the following channel matrix:**

[P(Y|X)] = ⎡ 1 - p    p      0   ⎤   …(i)
           ⎣   0      p    1 - p ⎦

**(i) Draw the channel diagram.**

**(ii) If the source has equally likely outputs, compute the probabilities associated with the channel outputs for p = 0.2. (Haryana-1998)**

**Solution:** (i) The channel diagram has been shown in figure 9.9. It may be noted that the channel represented by equation (i) is known as the binary erasure channel.

The binary erasure channel has two inputs x_{1} = 0 and x_{2} = 1, and three outputs y_{1} = 0, y_{2} = e, and y_{3} = 1, where e indicates an erasure. This means that the output is in doubt, and hence it should be erased.

(ii) We know that [P(Y)] is expressed as

[P(Y)] = [P(X)] [P(Y|X)]

**FIGURE 9.9** *Binary erasure channel.*

Substituting all the values, we get

[P(Y)] = [0.5  0.5] ⎡ 0.8   0.2    0  ⎤
                    ⎣  0    0.2   0.8 ⎦

Simplifying, we get

[P(Y)] = [0.4 0.2 0.4]

Thus, we get

P(y_{1}) = 0.4, P(y_{2}) = 0.2, and P(y_{3}) = 0.4 **Ans.**
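The erasure-channel result generalizes to any erasure probability p; a sketch, with `bec_matrix` as a hypothetical helper name:

```python
def bec_matrix(p):
    """Binary erasure channel: with probability p the output is the erasure
    symbol e; otherwise the input bit is delivered unchanged."""
    return [[1 - p, p, 0.0],
            [0.0, p, 1 - p]]

P = bec_matrix(0.2)
# Equally likely inputs: [P(Y)] = [0.5 0.5] [P(Y|X)]
p_y = [sum(0.5 * P[i][j] for i in range(2)) for j in range(3)]
print([round(v, 1) for v in p_y])   # [0.4, 0.2, 0.4]
```

Note that the erasure output y_{2} = e receives probability p regardless of the input distribution, since both rows place probability p on it.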