SCHWARZ INEQUALITY
Let us consider any pair of energy signals s1(t) and s2(t). The Schwarz inequality states that
$$\left[\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt\right]^2 \le \int_{-\infty}^{\infty} s_1^2(t)\,dt \int_{-\infty}^{\infty} s_2^2(t)\,dt \qquad \ldots(7.16)$$
The equality holds if and only if s2(t) = cs1(t), where c is any constant.
Proof:
            To prove this inequality, let s1(t) and s2(t) be expressed in terms of the pair of orthonormal basis functions φ1(t) and φ2(t) as under:
$$s_1(t) = s_{11}\,\phi_1(t) + s_{12}\,\phi_2(t)$$
$$s_2(t) = s_{21}\,\phi_1(t) + s_{22}\,\phi_2(t)$$
where φ1(t) and φ2(t) satisfy the orthonormality conditions over the entire time interval (−∞, ∞), i.e.
$$\int_{-\infty}^{\infty} \phi_i(t)\,\phi_j(t)\,dt = \begin{cases} 1, & i = j \\ 0, & i \ne j \end{cases} \qquad i, j = 1, 2$$
On this basis, we may represent the signals s1(t) and s2(t) by the following respective pair of vectors, as illustrated in figure 7.5.
$$\mathbf{s}_1 = \begin{bmatrix} s_{11} \\ s_{12} \end{bmatrix}, \qquad \mathbf{s}_2 = \begin{bmatrix} s_{21} \\ s_{22} \end{bmatrix}$$
From figure 7.5, we observe that the angle θ subtended between the vectors s1 and s2 is given by
FIGURE 7.5 Vector representations of signals s1(t) and s2(t), providing the background picture for proving the Schwarz inequality.
$$\cos\theta = \frac{\mathbf{s}_1^{T}\,\mathbf{s}_2}{\|\mathbf{s}_1\|\,\|\mathbf{s}_2\|} \qquad \ldots(7.17)$$
or
$$\cos\theta = \frac{\int_{-\infty}^{\infty} s_1(t)\,s_2(t)\,dt}{\left[\int_{-\infty}^{\infty} s_1^{2}(t)\,dt\right]^{1/2}\left[\int_{-\infty}^{\infty} s_2^{2}(t)\,dt\right]^{1/2}}$$
where we have used equations (7.15), (7.13) and (7.9). Recognizing that |cos θ| ≤ 1, the Schwarz inequality of equation (7.16) immediately follows from equation (7.17). Moreover, from the first line of equation (7.17), it may be noted that |cos θ| = 1 if and only if s2 = cs1, i.e., s2(t) = cs1(t), where c is an arbitrary constant.
The proof of the Schwarz inequality given above applies to real-valued signals. It may be readily extended to complex-valued signals, in which case equation (7.16) is reformulated as under:
$$\left|\int_{-\infty}^{\infty} s_1(t)\,s_2^{*}(t)\,dt\right|^2 \le \int_{-\infty}^{\infty} |s_1(t)|^2\,dt \int_{-\infty}^{\infty} |s_2(t)|^2\,dt \qquad \ldots(7.18)$$
where the equality holds if and only if s2(t) = cs1(t), where c is a constant.
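To make the inequality concrete, the following minimal sketch checks it numerically for two sampled real-valued energy signals. The Gaussian-shaped waveforms, the finite time grid and the NumPy-based approximation of the integrals are illustrative assumptions introduced here, not part of the original text.

```python
import numpy as np

# Finite time grid standing in for the interval (-inf, inf); the signals decay rapidly
t = np.linspace(-5.0, 5.0, 2001)
dt = t[1] - t[0]

# Two hypothetical real-valued energy signals, chosen only for illustration
s1 = np.exp(-t**2)
s2 = t * np.exp(-t**2 / 2)

lhs = (np.sum(s1 * s2) * dt) ** 2                    # [ integral s1(t) s2(t) dt ]^2
rhs = (np.sum(s1**2) * dt) * (np.sum(s2**2) * dt)    # integral s1^2 dt * integral s2^2 dt

print(lhs <= rhs)   # True; equality would require s2(t) = c * s1(t)
```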
7.6 GRAM-SCHMIDT ORTHOGONALIZATION PROCEDURE (Expected)
After discussing the geometric representation of energy signals, let us now discuss the Gram-Schmidt orthogonalization procedure, for which we require a complete orthonormal set of basis functions. To start, let us assume that we have a set of M energy signals represented by s1(t), s2(t), …, sM(t).
If we start with s1(t), chosen arbitrarily from this set, the first basis function is defined as
$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} \qquad \ldots(7.19)$$
where              E1 = energy of the signal s1(t)
From equation (7.19), we write
$$s_1(t) = \sqrt{E_1}\,\phi_1(t)$$
or
$$s_1(t) = s_{11}\,\phi_1(t)$$
where the coefficient s11 = √E1. Evidently, φ1(t) has unit energy, as required.
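As a one-line check of this normalization, a step the text asserts but does not spell out, the energy of φ1(t) is
$$\int_0^T \phi_1^2(t)\,dt = \frac{1}{E_1}\int_0^T s_1^2(t)\,dt = \frac{E_1}{E_1} = 1$$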
Next, using the signal s2(t), we can define the coefficient s21 as
$$s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt \qquad \ldots(7.21)$$
Therefore, we can introduce a new intermediate function
$$g_2(t) = s_2(t) - s_{21}\,\phi_1(t) \qquad \ldots(7.22)$$
which is orthogonal to φ1(t) over the interval 0 ≤ t ≤ T, by virtue of equation (7.21) and the fact that the basis function φ1(t) has unit energy.
Now, we can define the second basis function as
$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}} \qquad \ldots(7.23)$$
Now, substituting equation (7.22) into equation (7.23) and simplifying, we get the desired result as under:
$$\phi_2(t) = \frac{s_2(t) - s_{21}\,\phi_1(t)}{\sqrt{E_2 - s_{21}^2}} \qquad \ldots(7.24)$$
where E2 is the energy of the signal s2(t).
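The simplification from equation (7.23) to equation (7.24) rests on evaluating the energy of the intermediate function g2(t). Spelling out this step,
$$\int_0^T g_2^2(t)\,dt = \int_0^T \left[s_2(t) - s_{21}\,\phi_1(t)\right]^2 dt = E_2 - 2s_{21}\int_0^T s_2(t)\,\phi_1(t)\,dt + s_{21}^2\int_0^T \phi_1^2(t)\,dt = E_2 - 2s_{21}^2 + s_{21}^2 = E_2 - s_{21}^2$$
using equation (7.21) and the unit energy of φ1(t).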
From equation (7.23), it is obvious that
$$\int_0^T \phi_2^2(t)\,dt = 1$$
and, from equation (7.24), it is obvious that
$$\int_0^T \phi_1(t)\,\phi_2(t)\,dt = 0$$
This means that φ1(t) and φ2(t) form an orthonormal pair, as required.
Based upon the above discussion, we may write the general form as under:
$$g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t) \qquad \ldots(7.25)$$
where the coefficients sij are defined as
$$s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \qquad j = 1, 2, \ldots, i-1 \qquad \ldots(7.26)$$
It may be noted that equation (7.22) is a special case of equation (7.25) with i = 2. Further, for i = 1, the function gi(t) reduces to si(t).
Given the gi(t), we can now define the set of basis functions as under:
$$\phi_i(t) = \frac{g_i(t)}{\sqrt{\int_0^T g_i^2(t)\,dt}}, \qquad i = 1, 2, \ldots, N \qquad \ldots(7.27)$$
which forms an orthonormal set.
Further, the dimension N is less than or equal to the number of given signals M, depending on one of two possibilities (a code sketch of the full procedure follows this list):
(i)         The signals s1(t), s2(t), …, sM(t) form a linearly independent set, in which case N = M.
(ii)        The signals s1(t), s2(t), …, sM(t) are not linearly independent, in which case, N < M, and the intermediate function gi(t) is zero for i > N.
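The procedure of equations (7.25) to (7.27) can be sketched in a few lines of Python for signals given as samples on a common time grid. This is a minimal illustrative sketch rather than the text's own notation: the function name gram_schmidt, the sample spacing dt and the tolerance tol used to detect gi(t) = 0 are assumptions introduced here.

```python
import numpy as np

def gram_schmidt(signals, dt, tol=1e-9):
    """Orthonormalize a list of sampled energy signals over 0 <= t <= T.

    signals : list of 1-D NumPy arrays sampled on the same time grid
    dt      : sample spacing, used to approximate the integrals
    Returns the orthonormal basis functions phi_1, ..., phi_N (N <= M).
    """
    basis = []
    for s in signals:
        g = s.astype(float).copy()
        # g_i(t) = s_i(t) - sum_j s_ij phi_j(t), with s_ij = integral s_i(t) phi_j(t) dt
        for phi in basis:
            s_ij = np.sum(s * phi) * dt
            g -= s_ij * phi
        energy = np.sum(g * g) * dt            # integral of g_i^2(t) over 0 <= t <= T
        if energy > tol:                       # g_i(t) = 0 means s_i(t) is linearly dependent
            basis.append(g / np.sqrt(energy))  # phi_i(t) = g_i(t) / sqrt(integral g_i^2 dt)
    return basis
```

When the function returns fewer than M basis functions, the remaining signals are linear combinations of the earlier ones, which is exactly the N < M case described in (ii) above.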
NOTE: It may be noted that the conventional Fourier series expansion of a periodic signal is an example of a particular expansion of this type. Also, the representation of a band-limited signal in terms of its samples taken at the Nyquist rate may be viewed as another example of a particular expansion of this type. There are, however, two important distinctions that should be made:
(i)         The form of the basis functions φ1(t), φ2(t), …, φN(t) has not been specified. This means that, unlike the Fourier series expansion of a periodic signal or the sampled representation of a band-limited signal, we have not restricted the Gram-Schmidt orthogonalization procedure to sinusoidal functions or sinc functions of time.
(ii)        The expansion of the signal si(t) in terms of a finite number of terms is not an approximation wherein only the first N terms are significant, but rather an exact expression in which N and only N terms are significant.
EXAMPLE 7.1. Given the signals s1(t), s2(t), s3(t) and s4(t) shown in figure 7.6(a), use the Gram-Schmidt orthogonalization procedure to find an orthonormal basis for this set of signals.                                           (Very Important)
Solution: Step 1: Let us note that the energy of signal s1(t) is given by
$$E_1 = \int_0^T s_1^2(t)\,dt$$
Therefore, the first basis function φ1(t) will be
$$\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}}$$
Step 2: We know that
$$s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt$$
The energy of signal s2(t) is
$$E_2 = \int_0^T s_2^2(t)\,dt$$
FIGURE 7.6 (a) Set of signals to be orthonormalized.
FIGURE 7.6 (b) The resulting set of orthonormal functions.
Therefore, the second basis function φ2(t) will be
$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}}$$
or
$$\phi_2(t) = \frac{s_2(t) - s_{21}\,\phi_1(t)}{\sqrt{E_2 - s_{21}^2}}$$
Step 3: Again, we know that
$$s_{31} = \int_0^T s_3(t)\,\phi_1(t)\,dt$$
and the coefficient s32 equals
$$s_{32} = \int_0^T s_3(t)\,\phi_2(t)\,dt$$
The corresponding value of the intermediate function gi(t), with i = 3, is therefore
$$g_3(t) = s_3(t) - s_{31}\,\phi_1(t) - s_{32}\,\phi_2(t)$$
Further, we know that the third basis function φ3(t) will be expressed as
$$\phi_3(t) = \frac{g_3(t)}{\sqrt{\int_0^T g_3^2(t)\,dt}}$$
Finally, with i = 4, we find that g4(t)= 0 and the orthogonalization process is completed.
The three basis functions φ1(t), φ2(t), and φ3(t) form an orthonormal set, as shown in figure 7.6(b). In this example, we thus have M = 4 and N = 3, which means that the four signals s1(t), s2(t), s3(t) and s4(t) described in figure 7.6(a) do not form a linearly independent set. This is readily confirmed by noting that s4(t) = s1(t) + s3(t). Moreover, we note that any of these four signals can be expressed as a linear combination of the three basis functions, which is the essence of the Gram-Schmidt orthogonalization procedure.
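The waveforms of figure 7.6(a) are not reproduced here, so the following sketch uses hypothetical rectangular pulses chosen only to satisfy the stated relation s4(t) = s1(t) + s3(t); it confirms numerically that four such signals span only a three-dimensional signal space.

```python
import numpy as np

T, K = 1.0, 300                      # hypothetical symbol duration and number of samples
t = np.linspace(0.0, T, K, endpoint=False)

# Hypothetical rectangular pulses (not necessarily those of figure 7.6(a))
s1 = np.where(t < T/3, 1.0, 0.0)
s2 = np.where(t < 2*T/3, 1.0, 0.0)
s3 = np.where(t >= T/3, 1.0, 0.0)
s4 = s1 + s3                         # enforces the dependence s4(t) = s1(t) + s3(t)

# The rank of the sample matrix is the dimension N of the signal space
N = np.linalg.matrix_rank(np.vstack([s1, s2, s3, s4]))
print(N)   # prints 3, so M = 4 signals need only N = 3 basis functions
```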
■          The transmitted signal si(t) occupies the full duration T allotted to symbol mi. Further, si(t) is a real-valued energy signal, with energy
$$E_i = \int_0^T s_i^2(t)\,dt, \qquad i = 1, 2, \ldots, M$$
■          Now, the channel is assumed to have the following two characteristics:
(i)         the channel is linear, with a bandwidth that is wide enough to accommodate the transmission of the signal si(t) with negligible or no distortion.
(ii)        the channel noise, w(t), is a sample function of a zero-mean white Gaussian noise process. At this stage, it may be noted that the reasons for the second assumption are that it makes receiver calculations very easy and that it is a reasonable description of the type of noise present in many practical communication systems. Such a channel is popularly known as an additive white Gaussian noise (AWGN) channel.
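As a rough illustration of this channel model, the sketch below adds a sample function of zero-mean Gaussian noise to a sampled transmitted signal. The sinusoidal waveform and the per-sample noise standard deviation sigma are arbitrary choices made here for illustration, not values taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

T, K = 1.0, 1000
t = np.linspace(0.0, T, K, endpoint=False)
s = np.sin(2 * np.pi * 5 * t)        # hypothetical transmitted signal s_i(t)

sigma = 0.3                          # illustrative per-sample noise standard deviation
w = rng.normal(0.0, sigma, K)        # sample function of a zero-mean Gaussian noise process
x = s + w                            # received signal x(t) = s_i(t) + w(t) on the AWGN channel
```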
■          The requirement is to design the receiver so as to minimize the average probability of symbol error.
■          This average probability of symbol error may be defined, for equally likely symbols, as
$$P_e = \frac{1}{M}\sum_{i=1}^{M} P(\hat{m} \ne m_i \mid m_i \text{ sent})$$
where,                         mi = transmitted symbol,
m̂ = estimate of the transmitted symbol produced by the receiver, and
P(m̂ ≠ mi | mi sent) = the conditional error probability given that the ith symbol was sent.
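A small numerical example of this definition, assuming equally likely symbols and purely illustrative conditional error probabilities:

```python
# Hypothetical conditional error probabilities P(m_hat != m_i | m_i sent) for M = 4 symbols
p_error_given_mi = [0.01, 0.02, 0.015, 0.01]
M = len(p_error_given_mi)

# Average probability of symbol error for equally likely symbols
Pe = sum(p_error_given_mi) / M
print(Pe)   # 0.01375
```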
■          In the geometric representation of signals, we represent any set of M energy signals {si(t)} as linear combinations of N orthonormal basis functions, where N ≤ M. That is, given a set of real-valued energy signals s1(t), s2(t), …, sM(t), each of duration T seconds, we may write
$$s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M$$
■          Each signal in the set {si(t)} is completely determined by the vector of its coefficients.
$$\mathbf{s}_i = \begin{bmatrix} s_{i1} \\ s_{i2} \\ \vdots \\ s_{iN} \end{bmatrix}, \qquad i = 1, 2, \ldots, M$$
Here, the vector si is called a signal vector. Also, if we extend the conventional notion of two- and three-dimensional Euclidean spaces to an N-dimensional Euclidean space, the set of signal vectors {si | i = 1, 2, …, M} may be viewed as defining a corresponding set of M points in an N-dimensional Euclidean space, with mutually perpendicular axes labeled φ1, φ2, …, φN.
In fact, this N-dimensional Euclidean space is called the signal space.
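The sketch below illustrates how a signal vector is obtained in practice: each coefficient sij is the inner product of si(t) with the basis function φj(t). The two rectangular basis functions and the test signal are assumptions made purely for this illustration.

```python
import numpy as np

T, K = 1.0, 600
t = np.linspace(0.0, T, K, endpoint=False)
dt = T / K

# Two hypothetical orthonormal basis functions (rectangular, unit energy over 0 <= t <= T)
phi1 = np.where(t < T/2, np.sqrt(2.0 / T), 0.0)
phi2 = np.where(t >= T/2, np.sqrt(2.0 / T), 0.0)

# A signal built from the basis, so its signal vector is known in advance
s = 3.0 * phi1 - 2.0 * phi2

# s_ij = integral of s_i(t) phi_j(t) dt, approximated by discrete inner products
s_vec = np.array([np.sum(s * phi1) * dt, np.sum(s * phi2) * dt])
print(s_vec)   # approximately [ 3. -2.], the point representing s(t) in the signal space
```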
■          The idea of visualizing a set of energy signals geometrically is of utmost importance. It provides the mathematical basis for the geometric representation of energy signals and hence paves the way for the noise analysis of digital communication systems in a conceptually satisfying manner.
