
**SCHWARZ INEQUALITY**

Let us consider any pair of energy signals s_{1}(t) and s_{2}(t). The Schwarz inequality states that

$$\left[\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt\right]^2 \le \left[\int_{-\infty}^{\infty} s_1^2(t)\,dt\right]\left[\int_{-\infty}^{\infty} s_2^2(t)\,dt\right] \qquad \text{…(7.16)}$$

The equality holds if and only if s_{2}(t) = cs_{1}(t), where c is any constant.

**Proof:**

To prove this inequality, let s_{1}(t) and s_{2}(t) be expressed in terms of the pair of orthonormal basis functions φ_{1}(t) and φ_{2}(t) as under:

s_{1}(t) = s_{11}φ_{1}(t) + s_{12}φ_{2}(t)

s_{2}(t) = s_{21}φ_{1}(t) + s_{22}φ_{2}(t)

where φ_{1}(t) and φ_{2}(t) satisfy the orthonormality conditions over the entire time interval (−∞, ∞), i.e.

$$\int_{-\infty}^{\infty} \varphi_i(t)\,\varphi_j(t)\,dt = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases}$$

On this basis, we may represent the signals s_{1}(t) and s_{2}(t) by the following respective pair of vectors, as illustrated in figure 7.5.

s_{1} = [s_{11}, s_{12}]^{T}

s_{2} = [s_{21}, s_{22}]^{T}

From figure 7.5, we observe that the angle θ subtended between the vectors s_{1} and s_{2} is given by

**diagram**

**FIGURE 7.5** Vector representations of signals s_{1}(t) and s_{2}(t), providing the background picture for proving the Schwarz inequality.

$$\cos \theta = \frac{\mathbf{s}_1^{T}\mathbf{s}_2}{\|\mathbf{s}_1\|\,\|\mathbf{s}_2\|}$$

or

$$\cos \theta = \frac{\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt}{\left[\int_{-\infty}^{\infty} s_1^2(t)\,dt\right]^{1/2}\left[\int_{-\infty}^{\infty} s_2^2(t)\,dt\right]^{1/2}} \qquad \text{…(7.17)}$$

where we have used equations (7.15), (7.13) and (7.9). Recognizing that |cos θ| ≤ 1, the Schwarz inequality of equation (7.16) immediately follows from equation (7.17). Moreover, from the first line of equation (7.17), it may be noted that |cos θ| = 1 if and only if s_{2} = cs_{1}, i.e., s_{2}(t) = cs_{1}(t), where c is an arbitrary constant.

The proof of the Schwarz inequality given above applies to real-valued signals. It may be readily extended to complex-valued signals, in which case equation (7.16) is reformulated as under:

$$\left|\int_{-\infty}^{\infty} s_1(t)\,s_2^{*}(t)\,dt\right|^2 \le \int_{-\infty}^{\infty} |s_1(t)|^2\,dt \int_{-\infty}^{\infty} |s_2(t)|^2\,dt \qquad \text{…(7.18)}$$

where the equality holds if and only if s_{2}(t) = cs_{1}(t), where c is a constant.
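As a quick numerical check, the complex form of the Schwarz inequality in equation (7.18) can be verified for sampled signals. The two test signals below are arbitrary illustrative choices, not taken from the text, and the integrals are approximated by Riemann sums.

```python
import numpy as np

# Arbitrary sampled test signals on a uniform grid (not from the text).
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
dt = t[1] - t[0]
s1 = np.exp(-t) * np.cos(8 * np.pi * t)       # real-valued energy signal
s2 = (1 - t) * np.exp(1j * 4 * np.pi * t)     # complex-valued energy signal

# Left-hand side of (7.18): |integral of s1(t) s2*(t) dt|^2
lhs = abs(np.sum(s1 * np.conj(s2)) * dt) ** 2
# Right-hand side of (7.18): product of the two signal energies
rhs = (np.sum(np.abs(s1) ** 2) * dt) * (np.sum(np.abs(s2) ** 2) * dt)
assert lhs <= rhs

# Equality case: s2(t) = c * s1(t) for a constant c
s2c = (2.0 + 1.0j) * s1
lhs_eq = abs(np.sum(s1 * np.conj(s2c)) * dt) ** 2
rhs_eq = (np.sum(np.abs(s1) ** 2) * dt) * (np.sum(np.abs(s2c) ** 2) * dt)
assert np.isclose(lhs_eq, rhs_eq)
```

Because the integrals are discretized, the equality case holds only to floating-point tolerance.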

**7.6 GRAM-SCHMIDT ORTHOGONALIZATION PROCEDURE**

After discussing the geometric representation of energy signals, let us now discuss the Gram-Schmidt orthogonalization procedure, for which we require a complete orthonormal set of basis functions. To start, let us assume that we have a set of *M* energy signals denoted by s_{1}(t), s_{2}(t), …, s_{M}(t).

If we start with s_{1}(t), which is chosen from this set arbitrarily, the first basis function is defined as

$$\varphi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} \qquad \text{…(7.19)}$$

where E_{1} = energy of the signal s_{1}(t).

From equation (7.19), we write

$$s_1(t) = \sqrt{E_1}\,\varphi_1(t)$$

or s_{1}(t) = s_{11}φ_{1}(t) …(7.20)

where the coefficient s_{11} = √E_{1}. Clearly, φ_{1}(t) has unit energy, as required.

Next, using the signal s_{2}(t), we define the coefficient s_{21} as

$$s_{21} = \int_{0}^{T} s_2(t)\,\varphi_1(t)\,dt \qquad \text{…(7.21)}$$

Therefore, we can introduce a new intermediate function

g_{2}(t) = s_{2}(t) − s_{21}φ_{1}(t) …(7.22)

which is orthogonal to φ_{1}(t) over the interval 0 ≤ t ≤ T by equation (7.21) and by the fact that the basis function φ_{1}(t) has unit energy.

Now, we can define the second basis function as

$$\varphi_2(t) = \frac{g_2(t)}{\sqrt{\int_{0}^{T} g_2^2(t)\,dt}} \qquad \text{…(7.23)}$$

Now, substituting equation (7.22) into equation (7.23) and simplifying, we get the desired result as under:

$$\varphi_2(t) = \frac{s_2(t) - s_{21}\,\varphi_1(t)}{\sqrt{E_2 - s_{21}^2}} \qquad \text{…(7.24)}$$

where E_{2} is the energy of the signal s_{2}(t).

From equation (7.23), it is obvious that

$$\int_{0}^{T} \varphi_2^2(t)\,dt = 1$$

and, from equation (7.24), it is obvious that

$$\int_{0}^{T} \varphi_1(t)\,\varphi_2(t)\,dt = 0$$

This means that φ_{1}(t) and φ_{2}(t) form an orthonormal pair, as required.

Based on the above discussion, we may write the general form as under:

$$g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\varphi_j(t) \qquad \text{…(7.25)}$$

where the coefficients s_{ij} are defined as

$$s_{ij} = \int_{0}^{T} s_i(t)\,\varphi_j(t)\,dt, \qquad j = 1, 2, \ldots, i-1 \qquad \text{…(7.26)}$$

It may be noted that equation (7.22) is a special case of equation (7.25) with *i* = 2. Further, for *i* = 1, the function g_{i}(t) reduces to s_{i}(t).

Given the g_{i}(t), we can now define the set of basis functions as under:

$$\varphi_i(t) = \frac{g_i(t)}{\sqrt{\int_{0}^{T} g_i^2(t)\,dt}}, \qquad i = 1, 2, \ldots, N \qquad \text{…(7.27)}$$

which forms an orthonormal set.

Further, the dimension *N* is less than or equal to the number of given signals, *M*, depending on one of two possibilities:

(i) The signals s_{1}(t), s_{2}(t), …, s_{M}(t) form a linearly independent set, in which case N = M.

(ii) The signals s_{1}(t), s_{2}(t), …, s_{M}(t) are not linearly independent, in which case N < M, and the intermediate function g_{i}(t) is zero for i > N.
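The iterative procedure of equations (7.25)–(7.27) can be sketched for sampled signals as follows. This is an illustrative implementation, not part of the text; the energy threshold used to detect g_{i}(t) = 0 and the example signals are arbitrary choices.

```python
import numpy as np

def gram_schmidt(signals, dt):
    """Orthonormalize sampled energy signals (one signal per row).

    Follows equations (7.25)-(7.27): subtract from each s_i(t) its
    projections onto the basis functions found so far, then normalize
    the nonzero remainder g_i(t) to unit energy.
    """
    basis = []
    for s in np.asarray(signals, dtype=float):
        g = s.copy()
        for phi in basis:
            s_ij = np.sum(s * phi) * dt      # coefficient, as in eq. (7.26)
            g = g - s_ij * phi               # remove the phi_j component
        energy = np.sum(g * g) * dt
        if energy > 1e-12:                   # g_i(t) = 0 for dependent signals
            basis.append(g / np.sqrt(energy))
    return np.array(basis)

# Hypothetical example: three signals, of which the third is a linear
# combination of the first two, so N = 2 while M = 3.
t = np.linspace(0.0, 1.0, 2000, endpoint=False)
dt = t[1] - t[0]
sigs = [np.sin(2 * np.pi * t), t, 2 * np.sin(2 * np.pi * t) - 0.5 * t]
phis = gram_schmidt(sigs, dt)
```

The returned rows satisfy the orthonormality conditions numerically, and the linearly dependent third signal contributes no new basis function.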

**NOTE: It may be noted that the conventional Fourier series expansion of a periodic signal is an example of a particular expansion of this type. Also, the representation of a bandlimited signal in terms of its samples taken at the Nyquist rate may be viewed as another example of a particular expansion of this type. There are, however, two important distinctions that should be made:**

(i) The form of the basis functions φ_{1}(t), φ_{2}(t), …, φ_{N}(t) has not been specified. This means that unlike the Fourier series expansion of a periodic signal or the sampled representation of a band-limited signal, we have not restricted the Gram-Schmidt orthogonalization procedure to be in terms of sinusoidal functions or sinc functions of time.

(ii) The expansion of the signal s_{i}(t) in terms of a finite number of terms is not an approximation wherein only the first *N* terms are significant, but rather an exact expression where *N* and only *N* terms are significant.

**EXAMPLE 7.1.** Given the signals s_{1}(t), s_{2}(t), s_{3}(t) and s_{4}(t) shown in figure 7.6(a), use the Gram-Schmidt orthogonalization procedure to find an orthonormal basis for this set of signals.

**Solution:**

**Step 1:** Let us note that the energy of signal s_{1}(t) is given by

$$E_1 = \int_{0}^{T} s_1^2(t)\,dt$$

Therefore, the first basis function φ_{1}(t) will be

$$\varphi_1(t) = \frac{s_1(t)}{\sqrt{E_1}}$$

**Step 2:** We know that

$$s_{21} = \int_{0}^{T} s_2(t)\,\varphi_1(t)\,dt$$

The energy of signal s_{2}(t) is

$$E_2 = \int_{0}^{T} s_2^2(t)\,dt$$

**diagram**

**FIGURE 7.6** *(a) Set of signals to be orthonormalized. (b) The resulting set of orthonormal functions.*

Therefore, the second basis function φ_{2}(t) will be

$$\varphi_2(t) = \frac{g_2(t)}{\sqrt{\int_{0}^{T} g_2^2(t)\,dt}}$$

or

$$\varphi_2(t) = \frac{s_2(t) - s_{21}\,\varphi_1(t)}{\sqrt{E_2 - s_{21}^2}}$$

**Step 3:** Again, we know that the coefficient s_{31} equals

$$s_{31} = \int_{0}^{T} s_3(t)\,\varphi_1(t)\,dt$$

and the coefficient s_{32} equals

$$s_{32} = \int_{0}^{T} s_3(t)\,\varphi_2(t)\,dt$$

The corresponding value of the intermediate function g_{i}(t), with *i* = 3, is therefore

g_{3}(t) = s_{3}(t) − s_{31}φ_{1}(t) − s_{32}φ_{2}(t)

**equation**

Further, we know that the third basis function φ_{3}(t) will be expressed as

$$\varphi_3(t) = \frac{g_3(t)}{\sqrt{\int_{0}^{T} g_3^2(t)\,dt}}$$

Finally, with *i* = 4, we find that g_{4}(t) = 0 and the orthogonalization process is completed.

The three basis functions φ_{1}(t), φ_{2}(t), and φ_{3}(t) form an orthonormal set, as shown in figure 7.6(b). In this example, we thus have M = 4 and N = 3, which means that the four signals s_{1}(t), s_{2}(t), s_{3}(t) and s_{4}(t) described in figure 7.6(a) do not form a linearly independent set. This is readily confirmed by noting that s_{4}(t) = s_{1}(t) + s_{3}(t). Moreover, we note that any of these four signals can be expressed as a linear combination of the three basis functions, which is the essence of the Gram-Schmidt orthogonalization procedure.
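Since figure 7.6 is not reproduced here, the pulse shapes below are an assumption: unit-amplitude rectangular pulses over [0, T/3], [0, 2T/3], [T/3, T] and [0, T], chosen to be consistent with the stated relation s_{4}(t) = s_{1}(t) + s_{3}(t). Under that assumption, a discrete run of the procedure reproduces the conclusion M = 4, N = 3:

```python
import numpy as np

T, n = 1.0, 3000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

# Assumed shapes for figure 7.6(a): unit-amplitude rectangular pulses,
# consistent with s4(t) = s1(t) + s3(t) as stated in the text.
s1 = (t < T / 3).astype(float)
s2 = (t < 2 * T / 3).astype(float)
s3 = (t >= T / 3).astype(float)
s4 = np.ones_like(t)

basis = []
for s in (s1, s2, s3, s4):
    g = s.copy()
    for phi in basis:                        # g_i = s_i - sum_j s_ij phi_j
        g -= (np.sum(s * phi) * dt) * phi
    e = np.sum(g * g) * dt
    if e > 1e-9:                             # g_4(t) = 0 is skipped here
        basis.append(g / np.sqrt(e))

N = len(basis)                               # M = 4 signals, N = 3 basis functions
```

With these assumed pulses, the three resulting basis functions are non-overlapping rectangular pulses of amplitude √(3/T), matching the structure described for figure 7.6(b).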

■ The signal s_{i}(t) occupies the full duration *T* allotted to symbol m_{i}. Further, s_{i}(t) is a real-valued energy signal with energy given by

$$E_i = \int_{0}^{T} s_i^2(t)\,dt, \qquad i = 1, 2, \ldots, M$$

■ Now, the channel is assumed to have the following two characteristics:

(i) the channel is linear, with a bandwidth that is wide enough to accommodate the transmission of signal s_{i}(t) with negligible or no distortion.

(ii) the channel noise, w(t), is the sample function of a zero-mean white Gaussian noise process. At this stage, it may be noted that the reasons for the second assumption are that it makes receiver calculations tractable and that it is a reasonable description of the type of noise present in several practical communication systems. Such a channel is popularly known as an additive white Gaussian noise (AWGN) channel.

■ The requirement is to design the receiver so as to minimize the average probability of symbol error.

■ This average probability of symbol error may be defined as

$$P_e = \sum_{i=1}^{M} p_i\,P(\hat{m} \ne m_i \mid m_i \text{ sent})$$

where m_{i} = transmitted symbol, p_{i} = probability of transmitting the *i*th symbol, m̂ = estimate produced by the receiver, and P(m̂ ≠ m_{i} | m_{i} sent) = the conditional error probability given that the *i*th symbol was sent.

■ In geometric representation of signals, we represent any set of *M* energy signals {s_{i}(t)} as linear combinations of *N* orthonormal basis functions, where N ≤ M. This means that given a set of real-valued signals s_{1}(t), s_{2}(t), …, s_{M}(t), each of duration *T* seconds, we may write

$$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\varphi_j(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M$$

■ Each signal in the set {s_{i}(t)} is completely determined by the vector of its coefficients

s_{i} = [s_{i1}, s_{i2}, …, s_{iN}]^{T}, i = 1, 2, …, M

Here, the vector s_{i} is called a signal vector. Also, if we extend the conventional notion of two- and three-dimensional Euclidean spaces to an *N*-dimensional Euclidean space, the set of signal vectors {s_{i} | i = 1, 2, …, M} may be viewed as defining a corresponding set of *M* points in an *N*-dimensional Euclidean space, with mutually perpendicular axes labeled φ_{1}, φ_{2}, …, φ_{N}.

In fact, this N-dimensional Euclidean space is called the signal space.
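To make the signal-vector idea concrete, here is a small numerical illustration with an assumed two-function orthonormal basis (unit-energy sine and cosine over 0 ≤ t ≤ 1, not taken from the text): projecting a signal onto each basis function yields its coefficient vector, and the squared Euclidean norm of that vector equals the signal energy.

```python
import numpy as np

n = 4000
t = np.linspace(0.0, 1.0, n, endpoint=False)
dt = 1.0 / n

# Assumed orthonormal basis over 0 <= t <= 1: unit-energy sinusoids.
phi1 = np.sqrt(2) * np.sin(2 * np.pi * t)
phi2 = np.sqrt(2) * np.cos(2 * np.pi * t)

# A signal lying in the span of the basis, with known coefficients.
s = 3.0 * phi1 - 1.5 * phi2

# Signal vector s_i = [s_i1, s_i2]: project onto each basis function.
s_vec = np.array([np.sum(s * phi1) * dt, np.sum(s * phi2) * dt])

# The coefficients recover the signal exactly, and the vector's squared
# Euclidean norm equals the signal energy (Parseval-type relation).
reconstructed = s_vec[0] * phi1 + s_vec[1] * phi2
energy = np.sum(s ** 2) * dt
```

Here s_vec comes out as [3.0, −1.5] to floating-point accuracy, so the point (3.0, −1.5) in the φ_{1}–φ_{2} plane is the signal-space representation of s(t).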

■ The idea of visualizing a set of energy signals geometrically is of utmost importance. In fact, it provides the mathematical basis for the geometric representation of energy signals, thereby paving the way for the noise analysis of digital communication systems in a much more satisfying manner.