Linear Algebra (4)

Inner Product Spaces

If \(\lambda\) is a complex number, then \(\lambda \geq 0\) means that \(\lambda\) is real and nonnegative.

Inner Products and Norms

For \(z \in \mathbb{F}^n\), we define the norm of \(z\) w.r.t the Euclidean inner product by:

\[\|z\| = \sqrt{|z_1|^2 + ... + |z_n|^2}\]

where \(|z_k|^2 = z_k \bar{z}_k = a^2 + b^2\) for \(z_k = a + bi\).
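A minimal numerical sketch (Python/NumPy; the vector `z` is a made-up example, not from the notes):

```python
import numpy as np

# A complex vector in F^n (here F = C).
z = np.array([3 + 4j, 1 - 2j, 0.5j])

# Norm from the definition: sqrt(|z_1|^2 + ... + |z_n|^2),
# using |z_k|^2 = z_k * conj(z_k).
norm_by_def = np.sqrt(np.sum(z * np.conj(z)).real)

# NumPy's built-in Euclidean norm agrees.
assert np.isclose(norm_by_def, np.linalg.norm(z))
```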

Definition 6.2: Dot Product

For \(x, y \in \mathbb{R}^n\), the dot product of \(x\) and \(y\), denoted \(x \cdot y\), is defined by:

\[x \cdot y = x_1y_1 + ... + x_ny_n\]

where \(x=(x_1, ..., x_n), y=(y_1, ..., y_n)\)

It has the following properties:

  1. \(x \cdot x \geq 0, \; \forall x \in \mathbb{R}^n\)
  2. \(x \cdot x = 0\) IFF \(x = 0\)
  3. For \(y \in \mathbb{R}^n\) fixed, the map from \(\mathbb{R}^n \rightarrow \mathbb{R}\) that sends \(x \in \mathbb{R}^n\) to \(x \cdot y\) is linear.
  4. \(x \cdot y = y \cdot x, \; \forall x, y \in \mathbb{R}^n\)

Definition 6.3: Inner Product

An inner product on \(V\) is a function that takes each ordered pair \((u, v)\) of elements of \(V\) to a number \(<u, v> \in \mathbb{F}\) and has the following properties:

  1. Positivity: \[<v, v> \geq 0, \; \forall v \in V\]
  2. Definiteness: \[<v, v> = 0 \;\; \text{ IFF } \;\; v = 0\]
  3. Additivity in first slot: \[<u + v, w> = <u, w> + <v, w>, \; \forall u, v, w \in V\]
  4. Homogeneity in first slot: \[<\lambda u, v> = \lambda <u, v>, \; \forall \lambda \in \mathbb{F}, u,v \in V\]
  5. Conjugate Symmetry: \[<u, v> = \overline{<v, u>}, \; \forall u, v \in V\]

The Euclidean inner product on \(\mathbb{F}^n\) is defined by: \[<(w_1, ..., w_n), (z_1, ..., z_n)> = w_1 \bar{z}_1 + ... + w_n \bar{z}_n\]

If \(c_1, ..., c_n\) are positive numbers, then an inner product can be defined on \(\mathbb{F}^n\) by: \[<(w_1, ..., w_n), (z_1, ..., z_n)> = c_1w_1 \bar{z}_1 + ... + c_nw_n \bar{z}_n\]

An inner product can be defined on the vector space of continuous real-valued functions on the interval \([-1, 1]\) by: \[<f, g> = \int^1_{-1} f(x)g(x) dx\]

An inner product can be defined on \(P(\mathbb{R})\) by: \[<p, q> = \int^{\infty}_{0} p(x) q(x) e^{-x} dx\]
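A small sketch of this inner product (Python/NumPy; the two polynomials and the quadrature degree are illustrative choices). Gauss-Laguerre quadrature integrates \(f(x)e^{-x}\) over \([0, \infty)\) exactly when \(f\) is a polynomial of low enough degree:

```python
import numpy as np

# <p, q> = integral_0^inf p(x) q(x) e^{-x} dx on P(R).
p = np.polynomial.Polynomial([0, 1])     # p(x) = x
q = np.polynomial.Polynomial([1, 0, 1])  # q(x) = 1 + x^2

# Gauss-Laguerre nodes/weights: exact for integrands of degree <= 2*10 - 1.
nodes, weights = np.polynomial.laguerre.laggauss(10)
ip = np.sum(weights * p(nodes) * q(nodes))

# Exact value: integral of (x + x^3) e^{-x} over [0, inf) is 1! + 3! = 7.
assert np.isclose(ip, 7.0)
```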

Definition 6.5: Inner Product Space

An inner product space is a vector space \(V\) along with an inner product on \(V\).


For the rest of this chapter, \(V\) denotes an inner product space over \(\mathbb{F}\). If the inner product is not specified and the vector space is \(\mathbb{F}^n\), we assume it to be the Euclidean inner product.


Theorem 6.7: Basic Properties of an Inner Product

  1. For each fixed \(u \in V\), the function that takes \(v\) to \(<v, u>\) is a linear map from \(V\) to \(\mathbb{F}\).
  2. \(<0, u> = 0\) for every \(u \in V\).
  3. \(<u, 0> = 0\) for every \(u \in V\).
  4. \(<u, v + w> = <u, v> + <u, w>\) for all \(u, v, w \in V\).
  5. \(<u, \lambda v> = \bar{\lambda} <u, v> \; \forall \lambda \in \mathbb{F}, u, v \in V\).

Definition 6.8: Norm, \(\| v \|\)

For \(v \in V\), the norm of \(v\), denoted \(\|v\|\), is defined by:

\[\|v\| = \sqrt{<v, v>}\]

Theorem 6.10: Basic Properties of the Norm

Suppose \(v \in V\):

  1. \(\|v\| = 0\), if and only if \(v = 0\).
  2. \(\|\lambda v\| = |\lambda| \|v\|, \; \forall \lambda \in \mathbb{F}\).

Definition 6.11: Orthogonal

Two vectors \(u, v \in V\) are called orthogonal if \(<u, v> = 0\).

Theorem 6.12: Orthogonality and \(0\)

  1. \(0\) is orthogonal to every vector in \(V\).
  2. \(0\) is the only vector in \(V\) that is orthogonal to itself.

Theorem 6.13 Pythagorean Theorem

Suppose \(u\) and \(v\) are orthogonal vectors in \(V\). Then:

\[\|u + v\|^2 = \|u\|^2 + \|v\|^2\]

Theorem 6.14: An Orthogonal Decomposition

Suppose \(u, v \in V\), with \(v \neq 0\). Set \(c = \frac{<u, v>}{\|v\|^2}\) and \(w = u - \frac{<u, v>}{\|v\|^2} v\). Then:

\[<w, v> = 0\]

and

\[u = cv + w\]
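A quick check of this decomposition (Python/NumPy; `ip` mirrors the notes' convention of conjugating the second slot, and the vectors are made up):

```python
import numpy as np

# <a, b> = a_1 conj(b_1) + ... + a_n conj(b_n), as in these notes.
def ip(a, b):
    return np.sum(a * np.conj(b))

u = np.array([1 + 1j, 2, -1j])
v = np.array([0.5, 1j, 2 - 1j])

c = ip(u, v) / ip(v, v)   # c = <u, v> / ||v||^2
w = u - c * v             # w = u - c v

assert np.isclose(ip(w, v), 0)      # <w, v> = 0
assert np.allclose(u, c * v + w)    # u = c v + w
```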

Theorem 6.15: Cauchy-Schwarz Inequality

Suppose \(u, v \in V\). Then:

\[|<u, v>| \leq \|u\|\|v\|\]

This inequality is an equality if and only if one of \(u, v\) is a scalar multiple of the other.

Proof of theorem 6.15:

If \(v = 0\), both sides equal \(0\) and the inequality holds, so we can assume \(v \neq 0\). Then, by theorem 6.14, we have:

\[u = \frac{<u, v> v}{\|v\|^2} + w\]

Since \(w\) is orthogonal to \(v\), taking the squared norm on both sides and applying the Pythagorean theorem gives:

\[\|u\|^2 = \|\frac{<u, v> v}{\|v\|^2}\|^2 + \|w\|^2\]

Since \(\frac{<u, v>}{\|v\|^2} \in \mathbb{F}\), we can pull the scalar out of the norm:

\[\|u\|^2 = \frac{|<u, v>|^2}{\|v\|^4}\|v\|^2 + \|w\|^2 \geq \frac{|<u, v>|^2}{\|v\|^2}\]

Multiplying both sides by \(\|v\|^2\) and taking square roots, we have:

\[\|u\|\|v\| \geq |<u, v>|\]

Notice that equality holds exactly when \(\|w\|^2 = 0\), in other words \(w = u - \frac{<u, v>}{\|v\|^2} v = 0\), i.e. \(u = \frac{<u, v>}{\|v\|^2} v\). Thus, equality holds if and only if \(u\) is a scalar multiple of \(v\) (or \(v = 0\), in which case \(v\) is a scalar multiple of \(u\)).

Theorem 6.18: Triangle Inequality

Suppose \(u, v \in V\). Then

\[\|u + v\| \leq \|u\| + \|v\|\]

This inequality is an equality if and only if one of \(u, v\) is a nonnegative multiple of the other.

Theorem 6.22 Parallelogram Equality

Suppose \(u, v \in V\). Then:

\[\|u + v\|^2 + \|u - v\|^2 = 2 (\|u\|^2 + \|v\|^2)\]

Theorem: The Polarization Identities

Suppose \(u, v \in V\). If \(\mathbb{F} = \mathbb{R}\), then:

\[<u, v> = \frac{\|u + v\|^2 - \|u - v\|^2}{4}\]

If \(\mathbb{F} = \mathbb{C}\), then:

\[<u, v> = \frac{\|u + v\|^2 - \|u - v\|^2 + i(\|u + iv\|^2 - \|u - iv\|^2)}{4}\]
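A numerical check of the complex identity (Python/NumPy, illustrative vectors):

```python
import numpy as np

def ip(a, b):          # <a, b> = sum a_k conj(b_k)
    return np.sum(a * np.conj(b))

def nsq(a):            # ||a||^2
    return ip(a, a).real

u = np.array([1 + 2j, -1j, 3])
v = np.array([2, 1 + 1j, -1])

# The inner product is recovered from norms alone.
rec = (nsq(u + v) - nsq(u - v) + 1j * (nsq(u + 1j * v) - nsq(u - 1j * v))) / 4
assert np.isclose(rec, ip(u, v))
```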

Orthonormal Bases

Definition 6.23: Orthonormal

A list of vectors is called orthonormal if each vector in the list has norm \(1\) and is orthogonal to all the other vectors in the list. In other words, a list \(e_1, ..., e_m\) of vectors in \(V\) is orthonormal if

\[ <e_j, e_k> = \begin{cases} 1, \quad \text{if} \;\; j = k\\ 0, \quad \text{if} \;\; j \neq k \end{cases} \]

The standard basis of \(\mathbb{F}^n\) is an orthonormal list w.r.t the Euclidean inner product.

Theorem 6.25: The Norm of an Orthonormal Linear Combination

If \(e_1, ..., e_m\) is an orthonormal list of vectors in \(V\), then:

\[\|a_1e_1 + ... + a_m e_m\|^2 = |a_1|^2 + ... + |a_m|^2\]

for all \(a_1, ..., a_m \in \mathbb{F}\)

Theorem 6.26: An Orthonormal List Is Linearly Independent

Every orthonormal list of vectors is linearly independent.

Definition 6.27: Orthonormal Basis

An orthonormal basis of \(V\) is an orthonormal list of vectors in \(V\) that is also a basis of \(V\).

Theorem 6.28: An Orthonormal List of the Right Length is an Orthonormal Basis

Every orthonormal list of vectors in \(V\) with length \(\dim V\) is an orthonormal basis of \(V\).

Theorem 6.30: Writing a Vector as Linear Combination of Orthonormal Basis

Suppose \(e_1, ..., e_n\) is an orthonormal basis of \(V\) and \(v \in V\). Then:

\[v = <v, e_1> e_1 + ... + <v, e_n>e_n\]

and

\[\|v\|^2 = |<v, e_1>|^2 + ... + |<v, e_n>|^2\]

Proof of Theorem 6.30

Since \(e_1, ..., e_n\) is an orthonormal basis, there exist scalars \(a_1, ..., a_n\) s.t:

\[v = a_1e_1 + ... + a_ne_n\]

Taking the inner product with \(e_j\) on both sides and using orthonormality, we have:

\[<v, e_j> = a_j\]

The formula for \(\|v\|^2\) then follows from theorem 6.25.
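A small sketch of theorem 6.30 (Python/NumPy; the orthonormal basis of \(\mathbb{C}^2\) and the vector are made-up examples):

```python
import numpy as np

def ip(a, b):
    return np.sum(a * np.conj(b))

# An orthonormal basis of C^2.
e1 = np.array([1, 1j]) / np.sqrt(2)
e2 = np.array([1, -1j]) / np.sqrt(2)

v = np.array([2 - 1j, 3 + 4j])
a1, a2 = ip(v, e1), ip(v, e2)   # a_j = <v, e_j>

assert np.allclose(v, a1 * e1 + a2 * e2)                   # expansion
assert np.isclose(abs(a1)**2 + abs(a2)**2, ip(v, v).real)  # norm formula
```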

Theorem 6.31: Gram-Schmidt Procedure

Suppose \(v_1, ..., v_m\) is a linearly independent list of vectors in \(V\). Let \(e_1 = \frac{v_1}{\|v_1\|}\). For \(j=2, ..., m\), define \(e_j\) inductively by:

\[e_j = \frac{v_j - <v_j, e_1> e_1 - ... - <v_j, e_{j-1}>e_{j-1}}{\|v_j - <v_j, e_1> e_1 - ... - <v_j, e_{j-1}>e_{j-1}\|}\]

Then \(e_1, ..., e_m\) is an orthonormal list of vectors in \(V\) s.t:

\[span(v_1, ..., v_j) = span(e_1, ..., e_j)\]

for \(j = 1, ..., m\)
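A direct implementation of the procedure (Python/NumPy; `gram_schmidt` is a hypothetical helper name, and the input list is illustrative):

```python
import numpy as np

def ip(a, b):
    return np.sum(a * np.conj(b))

def gram_schmidt(vs):
    """Orthonormalize a linearly independent list of vectors."""
    es = []
    for v in vs:
        # Subtract the components of v along the e's built so far.
        w = v - sum(ip(v, e) * e for e in es)
        es.append(w / np.sqrt(ip(w, w).real))
    return es

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)
for j, e in enumerate(es):
    for k, f in enumerate(es):
        assert np.isclose(ip(e, f), 1.0 if j == k else 0.0)
```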

Theorem 6.34: Existence of Orthonormal Basis

Every finite-dimensional inner product space has an orthonormal basis.

Theorem 6.35: Orthonormal List Extends to Orthonormal Basis

Suppose \(V\) is finite-dimensional. Then every orthonormal list of vectors in \(V\) can be extended to an orthonormal basis of \(V\).

Theorem 6.37: Upper-Triangular Matrix w.r.t Orthonormal Basis

Suppose \(T \in L(V)\). If \(T\) has an upper-triangular matrix w.r.t some basis of \(V\), then \(T\) has an upper-triangular matrix w.r.t some orthonormal basis of \(V\).

Proof Theorem 6.37:

Assume \(v_1, ..., v_n\) is a basis of \(V\) w.r.t which \(T\) has an upper-triangular matrix. Then by theorem 5.26, \(span(v_1, ..., v_j)\) is invariant under \(T\) for each \(j = 1, ..., n\). By the Gram-Schmidt Procedure, we can construct an orthonormal list \(e_1, ..., e_n\) from \(v_1, ..., v_n\) s.t

\[span(e_1, ..., e_j) = span(v_1, ..., v_j)\]

for each \(j = 1, ..., n\). Then, by theorem 5.26 again, we can conclude that \(e_1, ..., e_n\) is an orthonormal basis w.r.t which \(T\) has an upper-triangular matrix.

Theorem 6.38: Schur's Theorem

Suppose \(V\) is a finite-dimensional complex vector space and \(T \in L(V)\). Then \(T\) has an upper-triangular matrix w.r.t some orthonormal basis of \(V\).

Theorem 6.42: Riesz Representation Theorem

Suppose \(V\) is finite-dimensional and \(\psi\) is a linear functional on \(V\). Then there is a unique vector \(u \in V\) that does not depend on the choice of basis s.t

\[\psi (v) = <v, u>\]

for every \(v \in V\).

In other words, every linear functional on \(V\) is the map sending \(v \in V\) to the inner product with some fixed vector \(u \in V\).

Proof Theorem 6.42

Let \(e_1, ..., e_n\) be an orthonormal basis of \(V\). Then:

\[\begin{aligned} \psi (v) &= \psi (<v, e_1> e_1 + ... + <v, e_n> e_n) \\ &= <v, e_1> \psi(e_1) + ... + <v, e_n>\psi(e_n)\\ &= <v, \overline{\psi(e_1)} e_1 + ... + \overline{\psi(e_n)} e_n> \end{aligned}\]

Let \(u = \overline{\psi(e_1)} e_1 + ... + \overline{\psi(e_n)} e_n\); then we have:

\[\psi(v) = <v, u>\]

We now show that \(u\) is unique. Suppose \(\psi (v) = <v, u_1> = <v, u_2>\) for every \(v \in V\). Then:

\[<v, u_1> - <v, u_2> = 0\]

Thus, \(\forall v \in V\):

\[<v, u_1 - u_2> = 0\]

In particular, taking \(v = u_1 - u_2\) gives \(\|u_1 - u_2\|^2 = 0\), so \(u_1 = u_2\).
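The proof is constructive, as a sketch shows (Python/NumPy; the functional `psi` is a made-up example, and the standard basis is orthonormal w.r.t the Euclidean inner product):

```python
import numpy as np

def ip(a, b):
    return np.sum(a * np.conj(b))

def psi(v):  # an arbitrary linear functional on C^3
    return 2 * v[0] - 1j * v[1] + (1 + 1j) * v[2]

# u = conj(psi(e_1)) e_1 + ... + conj(psi(e_n)) e_n
E = np.eye(3)
u = sum(np.conj(psi(E[j])) * E[j] for j in range(3))

v = np.array([1 + 2j, -3, 4j])
assert np.isclose(psi(v), ip(v, u))   # psi(v) = <v, u>
```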

Orthogonal Complements and Minimization Problems

Definition 6.45: Orthogonal Complement, \(U^{\perp}\)

If \(U\) is a subset of \(V\), then the orthogonal complement of \(U\), denoted \(U^{\perp}\), is the set of all vectors in \(V\) that are orthogonal to every vector in \(U\):

\[U^{\perp} = \{v \in V: <v, u> = 0 \;\; \forall u \in U\}\]

Theorem 6.46 Basic Properties of Orthogonal Complement

  1. If \(U\) is a subset of \(V\), then \(U^{\perp}\) is a subspace of \(V\).
  2. \(\{0\}^{\perp} = V\).
  3. \(V^{\perp} = \{0\}\).
  4. If \(U\) is a subset of \(V\), then \(U \cap U^{\perp} \subseteq \{0\}\). (The intersection is empty when \(0 \notin U\).)
  5. If \(U\) and \(W\) are subsets of \(V\) and \(U \subseteq W\), then \(W^{\perp} \subseteq U^{\perp}\).

Proof of Theorem 6.46:

Items 1 to 4 are straightforward; we prove item 5.

Suppose \(U, W\) are subsets of \(V\) and \(U \subseteq W\). Suppose \(v \in W^{\perp}\); then \(<v, u> = 0\) for all \(u \in W\). Since \(U \subseteq W\), we have \(<v, u> = 0\) for all \(u \in U\), so \(v \in U^{\perp}\). Hence, \(W^{\perp} \subseteq U^{\perp}\).

Theorem 6.47: Direct Sum of a Subspace and Its Orthogonal Complement

Suppose \(U\) is a finite-dimensional subspace of \(V\). Then:

\[V = U \oplus U^{\perp}\]

Proof of Theorem 6.47:

Let \(W = U^{\perp}\). We first show that:

\[V = U + W\]

Let \(e_1, ..., e_m\) be an orthonormal basis of \(U\). Then, \(\forall v \in V\):

\[v = \underbrace{(<v, e_1>e_1 + ... + <v, e_m>e_m)}_{u} + \underbrace{(v - <v, e_1>e_1 - ... - <v, e_m>e_m)}_{w}\]

Let \(u, w\) be defined as above; we clearly have \(u \in U\). Moreover, for each \(j = 1, ..., m\):

\[<w, e_j> = <v - <v, e_1>e_1 - ... - <v, e_m>e_m, \; e_j> = <v, e_j> - <v, e_j> = 0\]

So \(<w, u> = 0\) for all \(u \in U\), hence \(w \in U^{\perp} = W\). Thus, \(\forall v \in V\) we have:

\[v = u + w\]

with \(u \in U, w \in W\), and \(U, W\) are subspaces of \(V\).

Since \(U \cap U^{\perp} \subseteq \{0\}\), and \(0 \in U \cap U^{\perp}\) because both are subspaces, we have \(U \cap U^{\perp} = \{0\}\). By theorem 1.45, we have:

\[V = U \oplus W = U \oplus U^{\perp}\]

Theorem 6.50 Dimension of the Orthogonal Complement

Suppose \(V\) is finite-dimensional and \(U\) is a subspace of \(V\). Then:

\[\dim U^{\perp} = \dim V - \dim U\]

Theorem 6.51: The Orthogonal Complement of the Orthogonal Complement

Suppose \(U\) is a finite-dimensional subspace of \(V\), then:

\[U = (U^{\perp})^{\perp}\]
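A numerical check of theorem 6.50 (Python/SciPy; the subspace \(U\) is an arbitrary example given as the row span of `A`):

```python
import numpy as np
from scipy.linalg import null_space

# U = span of the two rows of A, a subspace of R^5.
A = np.array([[1., 0., 2., 0., 1.],
              [0., 1., 0., 3., 0.]])

# U^perp is exactly the null space of A; null_space returns an
# orthonormal basis of it as columns.
perp = null_space(A)

assert perp.shape[1] == 5 - 2    # dim U^perp = dim V - dim U
assert np.allclose(A @ perp, 0)  # each basis vector is orthogonal to U
```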

Definition 6.53: Orthogonal Projection, \(P_U\)

Suppose \(U\) is a finite-dimensional subspace of \(V\). For \(v \in V\), write \(v = u + w\), where \(u \in U, w \in U^{\perp}\) (this is possible by theorem 6.47). The orthogonal projection of \(V\) onto \(U\) is the operator \(P_U \in L(V)\) defined as:

\[P_{U} (v) = u\]

Suppose \(v \in V\). Let \(x \in V, x \neq 0\) and \(U = span(x)\), then:

\[v = (\frac{<v, x>}{\|x\|^2} x) + (v - \frac{<v, x>}{\|x\|^2} x)\]

The first term is in \(U\) and the second term is in \(U^{\perp}\), thus:

\[P_U(v) = \frac{<v, x>}{\|x\|^2} x\]
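A sketch of this one-dimensional projection (Python/NumPy, illustrative vectors; `project_onto_line` is a hypothetical helper):

```python
import numpy as np

def ip(a, b):
    return np.sum(a * np.conj(b))

def project_onto_line(v, x):
    """P_U(v) for U = span(x), x != 0."""
    return (ip(v, x) / ip(x, x)) * x

v = np.array([3.0, 1.0, -2.0])
x = np.array([1.0, 1.0, 1.0])
u = project_onto_line(v, x)

assert np.isclose(ip(v - u, x), 0)   # v - P_U(v) lies in U^perp
```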

Theorem 6.55: Properties of the Orthogonal Projection \(P_{U}\)

Suppose \(U\) is a finite-dimensional subspace of \(V\) and \(v \in V\). Then:

  1. \(P_U \in L(V)\).
  2. \(P_U (u) = u, \; \forall u \in U\).
  3. \(P_U (w) = 0, \; \forall w \in U^{\perp}\).
  4. \(\text{range } P_{U} = U\).
  5. \(\text{null } P_U = U^{\perp}\).
  6. \(v - P_U(v) \in U^{\perp}\)
  7. \(P^2_U = P_U\).
  8. \(\|P_U (v)\| \leq \|v\|\).
  9. For every orthonormal basis \(e_1, ..., e_m\) of \(U\), \[P_U (v) = <v, e_1>e_1 + .... + <v, e_m>e_m\]

Theorem 6.56 Minimizing the Distance to a Subspace

Suppose \(U\) is a finite-dimensional subspace of \(V\), \(v \in V\) and \(u \in U\). Then:

\[\|v - P_U (v)\| \leq \|v - u\|\]

Furthermore, the inequality above is an equality if and only if \(u = P_U(v)\).

Note that \(u\) here is an arbitrary element of \(U\), which may differ from \(P_U (v)\).
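This theorem is the geometry behind least squares: the point of a subspace closest to \(v\) is the orthogonal projection of \(v\). A sketch (Python/NumPy; the matrix `A` and vector `v` are made up, and \(U\) is the column space of `A`):

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
v = np.array([1., 0., 2.])

# Least squares finds coefficients minimizing ||A c - v||,
# so p = A c is the orthogonal projection P_U(v).
coef, *_ = np.linalg.lstsq(A, v, rcond=None)
p = A @ coef

# Any other u in U is at least as far from v (theorem 6.56).
rng = np.random.default_rng(0)
for _ in range(100):
    u = A @ rng.normal(size=2)
    assert np.linalg.norm(v - p) <= np.linalg.norm(v - u) + 1e-12
```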

Operators on Inner Product Spaces

Self-Adjoint and Normal Operators

Definition 7.2: Adjoint, \(T^*\)

Suppose \(T \in L(V, W)\). The adjoint of \(T\) is the function \(T^*: W \rightarrow V\) s.t:

\[<T(v), w> = <v, T^*(w)>\]

for every \(v \in V\) and every \(w \in W\).

To see why this definition makes sense, fix \(w \in W\) and consider the linear functional \(\varphi: V \rightarrow \mathbb{F}\) defined by:

\[\varphi(v) = <T(v), w>\]

By theorem 6.42, there exists a unique element \(v_2\) in \(V\) s.t for every element \(v \in V\):

\[\varphi(v) = <T(v), w> = <v, v_2>\]

We call this unique vector \(T^* (w)\).

Define \(T: \mathbb{R}^3 \rightarrow \mathbb{R}^2\) by: \[T(x_1, x_2, x_3) = (x_2 + 3x_3, 2x_1)\]

Here \(T^*: \mathbb{R}^2 \rightarrow \mathbb{R}^3\) can be found by:

\[<(x_1, x_2, x_3), T^*(y_1, y_2)> = <T(x_1, x_2, x_3), (y_1, y_2)> = <(x_1, x_2, x_3), (2y_2, y_1, 3y_1)>\]

Where \((y_1, y_2) \in \mathbb{R}^2, (x_1, x_2, x_3) \in \mathbb{R}^3\).
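With matrices this is easy to verify (Python/NumPy), using the fact, stated as theorem 7.10 below, that w.r.t orthonormal bases the matrix of \(T^*\) is the conjugate transpose of the matrix of \(T\):

```python
import numpy as np

# Matrix of T(x1, x2, x3) = (x2 + 3 x3, 2 x1) w.r.t the standard bases.
M_T = np.array([[0., 1., 3.],
                [2., 0., 0.]])

M_Tstar = M_T.conj().T   # matrix of T*

y = np.array([5.0, 7.0])
# Agrees with T*(y1, y2) = (2 y2, y1, 3 y1) computed above.
assert np.allclose(M_Tstar @ y, [2 * y[1], y[0], 3 * y[0]])
```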

Theorem 7.5: The Adjoint is a Linear Map

If \(T \in L(V, W)\), then \(T^* \in L(W, V)\).

Theorem 7.6: Properties of the Adjoint

  1. \((S + T)^* = S^* + T^*, \; \forall S, T \in L(V, W)\)
  2. \((\lambda T)^* = \bar{\lambda} T^*, \; \forall \lambda \in \mathbb{F}, T \in L(V, W)\)
  3. \((T^*)^* = T\)
  4. \(I^* = I\), where \(I\) is the identity operator on \(V\).
  5. \((ST)^* = T^* S^*, \; \forall T \in L(V, W), S \in L(W, U)\)

Theorem 7.7: Null Space and Range of \(T^*\)

Suppose \(T \in L(V, W)\). Then:

  1. \(\text{null } T^* = (\text{range }T)^{\perp}\).
  2. \(\text{range }T^* = (\text{null } T)^{\perp}\).
  3. \(\text{null }T = (\text{range }T^*)^{\perp}\).
  4. \(\text{range }T = (\text{null }T^*)^{\perp}\).

Definition 7.8: Conjugate Transpose

The conjugate transpose of an \(m \times n\) matrix is the \(n \times m\) matrix obtained by interchanging the rows and columns and then taking the complex conjugate of each entry. If \(\mathbb{F} = \mathbb{R}\), then the conjugate transpose of a matrix is the same as its transpose.

Theorem 7.10: The Matrix of \(T^*\)

Let \(T \in L(V, W)\). Suppose \(e_1, ..., e_n\) is an orthonormal basis of \(V\) and \(f_1, ..., f_m\) is an orthonormal basis of \(W\). Then:

\[M(T^*, (f_1, ..., f_m), (e_1, ..., e_n))\]

is the conjugate transpose of:

\[M(T, (e_1, ..., e_n), (f_1, ..., f_m))\]

Notice that this result only applies to orthonormal bases; for non-orthonormal bases, the matrix of \(T^*\) does not necessarily equal the conjugate transpose of the matrix of \(T\).

Definition 7.11: Self-Adjoint (Hermitian)

An operator \(T \in L(V)\) is called self-adjoint if \(T = T^*\). In other words, \(T \in L(V)\) is self-adjoint if and only if:

\[<T(v), w> = <v, T(w)>\]

for all \(v, w \in V\).

The adjoint on \(L(V)\) plays a role similar to that of complex conjugation on \(\mathbb{C}\), and a self-adjoint operator is analogous to a real number. The term symmetric is sometimes used for a self-adjoint operator on a real inner product space, since its matrix w.r.t an orthonormal basis equals its transpose.

Theorem 7.13: Eigenvalues of Self-Adjoint Operators are Real

Every eigenvalue of a self-adjoint operator is real.

Theorem 7.14:

Suppose \(V\) is a complex inner product space and \(T \in L(V)\). Suppose:

\[<T(v), v> = 0\]

for all \(v \in V\). Then \(T = 0\).

This result is not true for real inner product spaces. For example, the rotation \(T(x, y) = (-y, x)\) on \(\mathbb{R}^2\) satisfies \(<T(v), v> = 0\) for all \(v\), yet \(T \neq 0\).

Theorem 7.15: Over \(\mathbb{C}\), \(<T(v), v>\) is real for all \(v\) only for self-adjoint operators

Suppose \(V\) is a complex inner product space and \(T \in L(V)\). Then \(T\) is self-adjoint if and only if:

\[<T(v), v> \in \mathbb{R}\]

for every \(v \in V\)

Theorem 7.16: If \(T = T^*\) and \(<T(v), v> = 0; \forall v\), Then \(T = 0\)

Suppose \(T\) is a self-adjoint operator on \(V\) s.t:

\[<T(v), v> = 0\]

for all \(v \in V\). Then \(T = 0\).

Definition 7.18: Normal

An operator on an inner product space is called normal if it commutes with its adjoint. In other words, \(T \in L(V)\) is normal if \[TT^* = T^*T\]

Every self-adjoint operator is normal, since \(T = T^*\) gives \(TT^* = T^*T\).

Theorem 7.20: \(T\) is Normal IFF \(\|T(v)\| = \|T^*(v)\| \; \forall v\)

An operator \(T \in L(V)\) is normal if and only if:

\[\|T(v)\| = \|T^*(v)\|\]

for all \(v \in V\).

Theorem 7.21: For \(T\) normal, \(T, T^*\) Have the Same Eigenvectors

Suppose \(T \in L(V)\) is normal and \(v \in V\) is an eigenvector of \(T\) with eigenvalue \(\lambda\). Then \(v\) is also an eigenvector of \(T^*\) with eigenvalue \(\bar{\lambda}\).

Theorem 7.22: Orthogonal Eigenvectors for Normal Operators

Suppose \(T \in L(V)\) is normal. Then eigenvectors of \(T\) corresponding to distinct eigenvalues are orthogonal.

The Spectral Theorem

Theorem 7.24: Complex Spectral Theorem

Suppose \(\mathbb{F} = \mathbb{C}\) and \(T \in L(V)\). Then the following are equivalent:

  1. \(T\) is normal.
  2. \(V\) has an orthonormal basis consisting of eigenvectors of \(T\).
  3. \(T\) has a diagonal matrix with respect to some orthonormal basis of \(V\).
Proof Theorem 7.24

Suppose 2 holds. Then \(T\) has a diagonal matrix \(A\) w.r.t some orthonormal basis, so \(T^*\) has the diagonal matrix \(\bar{A}\) w.r.t the same basis. Since diagonal matrices commute, 1 holds. Note also that 2 and 3 are equivalent by the definition of the matrix of an operator.

Suppose 1 holds, so \(T\) is normal. By theorem 6.38, there is an orthonormal basis \(e_1, ..., e_n\) of \(V\) w.r.t which \(T\) has an upper-triangular matrix with entries \(a_{j, k}\). Since the first column of this matrix contains only \(a_{1, 1}\), we have:

\[\|T(e_1)\|^2 = |a_{1, 1}|^2\]

\[\|T^*(e_1)\|^2 = |\overline{a_{1, 1}}|^2 + ... + |\overline{a_{1, n}}|^2 = |a_{1, 1}|^2 + ... + |a_{1, n}|^2\]

Since \(T\) is normal, \(\|T^*(e_1)\|^2 = \|T(e_1)\|^2\) by theorem 7.20, so every entry of the first row except possibly \(a_{1, 1}\) equals \(0\). Repeating this argument for \(e_2, ..., e_n\) shows the matrix is diagonal, which gives 3.
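A numerical illustration (Python/SciPy; the normal-but-not-self-adjoint matrix is a made-up example). For a normal operator, the complex Schur form, which is always upper-triangular, collapses to a diagonal matrix, exactly as in the proof:

```python
import numpy as np
from scipy.linalg import schur

# A normal (but not self-adjoint) operator: rotation by 90 degrees.
A = np.array([[0., -1.],
              [1.,  0.]])
assert np.allclose(A @ A.conj().T, A.conj().T @ A)   # normality

# Complex Schur form: A = Z T Z^H, Z unitary, T upper-triangular.
T, Z = schur(A.astype(complex), output='complex')
assert np.allclose(T, np.diag(np.diag(T)))           # T is diagonal
assert np.allclose(Z @ T @ Z.conj().T, A)
```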

Theorem 7.26: Invertible Quadratic Expressions

Suppose \(T \in L(V)\) is self-adjoint and \(b, c \in \mathbb{R}\) are such that \(b^2 < 4c\). Then:

\[T^2 + bT + cI\]

is invertible.

Theorem 7.27: Self-Adjoint Operators Have Eigenvalues

Suppose \(V \neq \{0\}\) and \(T \in L(V)\) is a self-adjoint operator. Then \(T\) has an eigenvalue.

Theorem 7.28: Self-Adjoint Operators and Invariant Subspaces

Suppose \(T \in L(V)\) is self-adjoint and \(U\) is a subspace of \(V\) that is invariant under \(T\). Then

  1. \(U^{\perp}\) is invariant under \(T\).
  2. \(T|_{U} \in L(U)\) is self-adjoint.
  3. \(T|_{U^{\perp}} \in L(U^{\perp})\) is self-adjoint.

Theorem 7.29: Real Spectral Theorem

Suppose \(\mathbb{F} = \mathbb{R}\) and \(T \in L(V)\). Then the following are equivalent:

  1. \(T\) is self-adjoint (Symmetric).
  2. \(V\) has an orthonormal basis consisting of eigenvectors of \(T\).
  3. \(T\) has a diagonal matrix w.r.t some orthonormal basis of \(V\).
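For real symmetric matrices this is exactly what `numpy.linalg.eigh` computes; a quick sketch with a made-up symmetric matrix:

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 2., 1.],
              [0., 1., 2.]])   # self-adjoint: A = A^T

# eigh returns real eigenvalues and orthonormal eigenvectors (columns of Q).
w, Q = np.linalg.eigh(A)
assert np.allclose(Q.T @ Q, np.eye(3))        # orthonormal basis of R^3
assert np.allclose(Q @ np.diag(w) @ Q.T, A)   # A diagonal w.r.t this basis
```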

Positive Operators and Isometries

Definition 7.31: Positive Operator (Positive Semi-definite)

An operator \(T \in L(V)\) is called positive if \(T\) is self-adjoint and

\[<T(v), v> \geq 0\]

for all \(v \in V\).

If \(V\) is a complex vector space, then the requirement that \(T\) be self-adjoint can be dropped, since \(<T(v), v> \geq 0\) forces \(<T(v), v> \in \mathbb{R}\) for all \(v\), which by theorem 7.15 implies that \(T\) is self-adjoint.

If \(U\) is a subspace of \(V\), then the orthogonal projection \(P_U\) is a positive operator: writing \(v = u + w\) with \(u \in U, w \in U^{\perp}\), we have \[<P_U (v), v> = <u, u + w> = \|u\|^2 \geq 0\]

At the same time, \(P_U\) is a self-adjoint operator, because its matrix w.r.t an orthonormal basis of \(V\) obtained by extending an orthonormal basis \(e_1, ..., e_m\) of \(U\) is diagonal with \(1\)'s and \(0\)'s on the diagonal, hence equals its own conjugate transpose.

Definition 7.33: Square Root

An operator \(R\) is called a square root of an operator \(T\) if \(R^2 = T\).

If \(T \in L(\mathbb{F}^3)\) is defined by \(T(z_1, z_2, z_3) = (z_3, 0, 0)\), then the operator \(R \in L(\mathbb{F}^3)\) defined by \(R(z_1, z_2, z_3) = (z_2, z_3, 0)\) is a square root of \(T\).

Theorem 7.35: Characterization of Positive Operators

Let \(T \in L(V)\). Then the following are equivalent:

  1. \(T\) is positive.
  2. \(T\) is self-adjoint and all the eigenvalues of \(T\) are non-negative.
  3. \(T\) has a positive square root.
  4. \(T\) has a self-adjoint square root.
  5. There exists an operator \(R \in L(V)\) s.t \(T = R^*R\).

Theorem 7.36: Every Positive Operator Has Only One Positive Square Root

Every positive operator on \(V\) has a unique positive square root.

A positive operator can have infinitely many square roots, although only one of them can be positive.
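A sketch of computing the positive square root via the spectral theorem (Python/NumPy; the positive matrix is a made-up example):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])   # self-adjoint with eigenvalues 1, 3 >= 0

# Diagonalize, take nonnegative square roots of the eigenvalues,
# and reassemble: sqrt(A) = Q diag(sqrt(w)) Q^T.
w, Q = np.linalg.eigh(A)
R = Q @ np.diag(np.sqrt(w)) @ Q.T

assert np.allclose(R @ R, A)                # R^2 = A
assert np.all(np.linalg.eigvalsh(R) >= 0)   # R is positive
```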

Definition 7.37: Isometry

An operator \(S \in L(V)\) is called an isometry if:

\[\|S(v)\| = \|v\|\]

for all \(v \in V\)

In other words, an operator is an isometry if it preserves norms.

Theorem 7.42: Characterization of Isometries

Suppose \(S \in L(V)\). Then the following are equivalent:

  1. \(S\) is an isometry.
  2. \(<S(u), S(v)> = <u, v>, \; \forall u, v \in V\).
  3. \(S(e_1), ..., S(e_n)\) is orthonormal for every orthonormal list of vectors \(e_1, ..., e_n \in V\).
  4. There exists an orthonormal basis \(e_1, ..., e_n\) of \(V\) s.t \(S(e_1), ..., S(e_n)\) is orthonormal.
  5. \(S^*S = I\)
  6. \(SS^* = I\)
  7. \(S^*\) is an isometry.
  8. \(S\) is invertible and \(S^{-1} = S^*\).

Note that conditions 5 and 6 show that every isometry is normal. The converse fails: for example, \(2I\) is normal but not an isometry.
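A quick check with a rotation (Python/NumPy; the angle is arbitrary):

```python
import numpy as np

theta = 0.7
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: an isometry

assert np.allclose(S.T @ S, np.eye(2))   # S* S = I
v = np.array([3.0, -4.0])
assert np.isclose(np.linalg.norm(S @ v), np.linalg.norm(v))   # norm preserved
```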

Theorem 7.43: Description of Isometries When \(\mathbb{F} = \mathbb{C}\)

Suppose \(V\) is a complex inner product space and \(S \in L(V)\). Then the following are equivalent:

  1. \(S\) is an isometry.
  2. There is an orthonormal basis of \(V\) consisting of eigenvectors of \(S\) whose corresponding eigenvalues all have absolute value \(1\).

Polar Decomposition and Singular Value Decomposition

Definition 7.44: \(\sqrt{T}\)

If \(T\) is a positive operator, then \(\sqrt{T}\) denotes the unique positive square root of \(T\).

Theorem 7.45: Polar Decomposition

Suppose \(T \in L(V)\). Then there exists an isometry \(S \in L(V)\) s.t:

\[T = S\sqrt{T^*T}\]
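SciPy computes this factorization directly (the matrix `A` is a made-up invertible example; `scipy.linalg.polar` returns the right polar decomposition \(A = UP\), where \(P\) corresponds to \(\sqrt{T^*T}\) and \(U\) to the isometry \(S\)):

```python
import numpy as np
from scipy.linalg import polar

A = np.array([[2., 1.],
              [0., 1.]])

U, P = polar(A)   # A = U @ P, U unitary, P positive semidefinite

assert np.allclose(U @ P, A)
assert np.allclose(U.T @ U, np.eye(2))          # U is an isometry
assert np.allclose(P, P.T)                      # P is self-adjoint
assert np.all(np.linalg.eigvalsh(P) >= -1e-12)  # with eigenvalues >= 0
```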

Definition 7.49: Singular Value Decomposition

Suppose \(T \in L(V)\). The singular values of \(T\) are the eigenvalues of \(\sqrt{T^* T}\), with each eigenvalue \(\lambda\) repeated \(\dim E(\lambda, \sqrt{T^*T})\) times.

The singular values of \(T\) are all non-negative, because they are the eigenvalues of the positive operator \(\sqrt{T^*T}\).

Define \(T \in L(\mathbb{F}^4)\): \[T(z_1, z_2, z_3, z_4) = (0, 3z_1, 2z_2, -3z_4)\]

After a short calculation, we have:

\[T^* (x_1, x_2, x_3, x_4) = (3x_2, 2x_3, 0, -3x_4)\]

Thus:

\[T^*T(z_1, z_2, z_3, z_4) = (9z_1, 4z_2, 0, 9z_4)\]

Then:

\[\sqrt{T^* T} (z_1, z_2, z_3, z_4) = (3z_1, 2z_2, 0, 3z_4)\]

The eigenvalues are \(3, 2, 0\) and: \[\dim E(3, \sqrt{T^*T}) = 2, \dim E(2, \sqrt{T^*T}) = 1, \dim E(0, \sqrt{T^*T}) = 1\]

Hence, the singular values of \(T\) are \(3, 3, 2, 0\)
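The same numbers come out of a standard SVD routine applied to the matrix of \(T\) (Python/NumPy):

```python
import numpy as np

# Matrix of T(z1, z2, z3, z4) = (0, 3 z1, 2 z2, -3 z4).
M = np.array([[0., 0., 0.,  0.],
              [3., 0., 0.,  0.],
              [0., 2., 0.,  0.],
              [0., 0., 0., -3.]])

# numpy returns the singular values in descending order.
s = np.linalg.svd(M, compute_uv=False)
assert np.allclose(s, [3., 3., 2., 0.])
```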

Theorem 7.51: Singular Value Decomposition

Suppose \(T \in L(V)\) has singular values \(s_1, ..., s_n\). Then there exist orthonormal bases \(e_1, ..., e_n\) and \(f_1, ..., f_n\) of \(V\) s.t:

\[T(v) = s_1 <v, e_1>f_1 + .... + s_n<v, e_n>f_n\]

for every \(v \in V\).

Theorem 7.52: Singular Values Without Taking Square Root of an Operator

Suppose \(T \in L(V)\). Then the singular values of \(T\) are the non-negative square roots of the eigenvalues of \(T^*T\), with each eigenvalue \(\lambda\) repeated \(\dim E(\lambda, T^*T)\) times.

Proof of Theorem 7.52:

Let \(e_1, ..., e_n\) be an orthonormal basis of \(V\) consisting of eigenvectors of \(\sqrt{T^*T}\), which exists by the spectral theorem, s.t:

\[\sqrt{T^*T} (e_j) = s_j e_j\]

Then:

\[\sqrt{T^*T}\sqrt{T^*T} (e_j) = \sqrt{T^*T}(s_j e_j) = s^2_j e_j = T^*T(e_j)\]

Thus, \(s^2_j\) is an eigenvalue of \(T^*T\) with eigenvector \(e_j\), so the eigenvalues of \(T^*T\) are exactly the squares of the singular values of \(T\), with matching multiplicities.