Linear Algebra (5)

Operators on Complex Vector Spaces

Null Spaces of Powers of an Operator

Theorem 8.2: Sequence of Increasing Null Spaces

Suppose \(T \in L(V)\). Then

\[\{0\} = \text{null }T^0 \subseteq \text{null }T^1 \subseteq ... \subseteq \text{null }T^k \subseteq ... \]

where \(T^0 = I\) is the identity operator on \(V\).

Proof of Theorem 8.2:

Suppose \(k\) is a non-negative integer and \(v \in \text{null }T^k\). Then \(T^k(v) = 0\), so \(T^{k+1}(v) = T(T^k(v)) = T(0) = 0\). Thus, \(\forall v \in \text{null }T^k\), \(v \in \text{null }T^{k+1}\).


Theorem 8.3: Equality in the Sequence of Null Spaces

Suppose \(T \in L(V)\). Suppose \(m\) is a non-negative integer such that \(\text{null }T^m = \text{null }T^{m+1}\). Then:

\[\text{null } T^m = \text{null }T^{m+1} = \text{null }T^{m+2} = ...\]

Proof of Theorem 8.3:

Let \(k\) be a positive integer. We want to prove:

\[\text{null } T^{m+k} = \text{null } T^{m+k+1}\]

Since \(\text{null } T^{m+k} \subseteq \text{null } T^{m+k+1}\) by theorem 8.2, we only need to prove:

\[\text{null } T^{m+k+1} \subseteq \text{null } T^{m+k}\]

Let \(v \in \text{null } T^{m+k+1}\), then:

\[T^{m + 1}(T^k (v)) = T^{m+k+1} (v) = 0\]

Thus \(T^k(v) \in \text{null } T^{m+1}\). Since \(\text{null } T^m = \text{null } T^{m+1}\) by assumption, we have:

\[T^k(v) \in \text{null } T^m \implies T^m (T^k(v)) = T^{m+k} (v) = 0\]

Thus, \(\forall v \in \text{null } T^{m+k+1}\), we have \(v \in \text{null } T^{m+k}\).

Theorem 8.4: Null Spaces Stop Growing

Suppose \(T \in L(V)\). Let \(n = \dim V\). Then:

\[\text{null } T^n = \text{null } T^{n+1} = ...\]
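
As a quick numerical illustration of theorems 8.2 and 8.4 (a sketch assuming NumPy; the matrix is illustrative): the dimensions of \(\text{null } T^k\) increase and then stop growing by \(k = \dim V\).

```python
# Sketch (assuming NumPy): dim null T^k is non-decreasing in k and stops
# growing by k = dim V (theorems 8.2 and 8.4).
import numpy as np

# Illustrative operator on F^4: a 3x3 nilpotent block plus the eigenvalue 2,
# so the null spaces of its powers grow one dimension at a time, then stop.
T = np.array([[0., 1., 0., 0.],
              [0., 0., 1., 0.],
              [0., 0., 0., 0.],
              [0., 0., 0., 2.]])

n = T.shape[0]
P = np.eye(n)  # T^0 = I
for k in range(n + 2):
    print(f"dim null T^{k} = {n - np.linalg.matrix_rank(P)}")  # n - rank T^k
    P = T @ P
# prints 0, 1, 2, 3, 3, 3: once equality holds, it persists (theorem 8.3)
```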


Theorem 8.5: \(V\) is the Direct Sum of \(\text{null } T^{\dim V}\) and \(\text{range } T^{\dim V}\)

Suppose \(T \in L(V)\). Let \(n = \dim V\). Then:

\[V = \text{null } T^n \oplus \text{range } T^n\]

In general, \(V \neq \text{null } T \oplus \text{range }T\).
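
For example, on \(\mathbb{F}^2\) define \(T(x, y) = (y, 0)\). Then \(\text{null } T = \text{range } T = \{(x, 0) : x \in \mathbb{F}\}\), so the sum \(\text{null } T + \text{range } T\) is neither direct nor all of \(V\). But \(T^2 = 0\), so \(\text{null } T^2 = V\) and \(\text{range } T^2 = \{0\}\), and indeed \(V = \text{null } T^2 \oplus \text{range } T^2\).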

Proof of Theorem 8.5:

We first show that:

\[(\text{null } T^n) \cap (\text{range }T^n) = \{0\}\]

where \(n = \dim V\). Let \(v \in (\text{null } T^n) \cap (\text{range }T^n)\). Then \(T^n(v) = 0\), and there exists \(u \in V\) with \(T^n(u) = v\). Then we have:

\[T^n(T^n (u)) = T^n (v) = 0 \implies T^{2n} (u) = 0\]

By theorem 8.4, we have \(\text{null } T^{2n} = \text{null }T^n\), thus:

\[T^{n} (u) = v = 0\]

Thus, \(v = 0\).

Then by theorem 1.45, the sum \(\text{null } T^n + \text{range }T^n\) is a direct sum, and by theorem 3.22 (the fundamental theorem of linear maps):

\[\dim \left((\text{null } T^n) \oplus (\text{range }T^n)\right) = \dim\text{null } T^n + \dim\text{range }T^n = \dim V\]

Thus \((\text{null } T^n) \oplus (\text{range }T^n)\) is a subspace of \(V\) with dimension \(\dim V\), so it equals \(V\).


Definition 8.9: Generalized Eigenvector

Suppose \(T \in L(V)\) and \(\lambda\) is an eigenvalue of \(T\). A vector \(v \in V\) is called a generalized eigenvector of \(T\) corresponding to \(\lambda\) if \(v \neq 0\) and:

\[(T - \lambda I)^j (v) = 0\]

for some positive integer \(j\).

Although \(j\) is allowed to be an arbitrary positive integer in this equation, every generalized eigenvector satisfies the equation with \(j = \dim V\) (see theorem 8.11).


Definition 8.10: Generalized Eigenspace, \(G(\lambda, T)\)

Suppose \(T \in L(V)\) and \(\lambda \in \mathbb{F}\). The generalized eigenspace of \(T\) corresponding to \(\lambda\), denoted \(G(\lambda, T)\), is defined to be the set of all generalized eigenvectors of \(T\) corresponding to \(\lambda\) along with the \(0\) vector.

If \(T \in L(V)\) and \(\lambda \in \mathbb{F}\), then:

\[E(\lambda, T) \subseteq G(\lambda, T)\]

The generalized eigenspace \(G(\lambda, T)\) is a subspace of \(V\), because by theorem 8.11 it is a null space, and null spaces are subspaces of \(V\).


Theorem 8.11: Description of Generalized Eigenspaces

Suppose \(T \in L(V)\) and \(\lambda \in \mathbb{F}\). Then:

\[G(\lambda, T) = \text{null }(T - \lambda I)^{\dim V}\]

Proof of Theorem 8.11:

Suppose \(v \in \text{null }(T - \lambda I)^{\dim V}\). Then \(v \in G(\lambda, T)\) by definition 8.10 (taking \(j = \dim V\)).

Conversely, suppose \(v \in G(\lambda, T)\). Then there exists a positive integer \(j\) s.t \((T - \lambda I)^j (v) = 0\), i.e. \(v \in \text{null }(T - \lambda I)^j\). By theorems 8.2 and 8.4:

\[\text{null }(T - \lambda I)^{j} \subseteq \text{null }(T - \lambda I)^{\dim V}\]

Thus,

\[v \in \text{null }(T - \lambda I)^{\dim V}\]
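
A numerical sketch of theorem 8.11 (assuming NumPy; the matrix and the `null_space_basis` helper are illustrative): \(G(\lambda, T)\) can be computed as the null space of \((T - \lambda I)^{\dim V}\), and it can be strictly larger than the eigenspace \(E(\lambda, T)\).

```python
# Sketch (assuming NumPy): G(lambda, T) = null (T - lambda I)^(dim V),
# with the null space extracted from an SVD.
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Columns form an orthonormal basis of null A (right singular vectors)."""
    _, s, vh = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vh[rank:].conj().T

T = np.array([[2., 1., 0.],
              [0., 2., 0.],
              [0., 0., 3.]])
n = T.shape[0]
A = T - 2.0 * np.eye(n)

E = null_space_basis(A)                             # eigenspace E(2, T)
G = null_space_basis(np.linalg.matrix_power(A, n))  # generalized eigenspace G(2, T)
print(E.shape[1], G.shape[1])  # 1 2: E(2, T) is strictly contained in G(2, T)
```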


Theorem 8.13: Linearly Independent Generalized Eigenvectors

Let \(T \in L(V)\). Suppose \(\lambda_1, ..., \lambda_m\) are distinct eigenvalues of \(T\) and \(v_1, ..., v_m\) are corresponding generalized eigenvectors. Then \(v_1, ..., v_m\) is linearly independent.


Definition 8.16: Nilpotent

An operator is called nilpotent if some power of it equals \(0\).


Theorem 8.18: Nilpotent Operator Raised to Dimension of Domain is \(0\)

Suppose \(N \in L(V)\) is nilpotent. Then \(N^{\dim V} = 0\).

Proof of Theorem 8.18

Since \(N\) is nilpotent, there is a positive integer \(j\) with \(N^j = 0\), i.e. \(N^j(v) = 0\) for all \(v \in V\); hence \(G(0, N) = V\). Thus, by theorem 8.11, we have:

\[\text{null } N^{\dim V} = V\]

Thus, \(N^{\dim V} = 0\).

Theorem 8.19: Matrix of a Nilpotent Operator

Suppose \(N\) is a nilpotent operator on \(V\). Then there is a basis of \(V\) w.r.t which the matrix of \(N\) has the form:

\[ A = \begin{bmatrix} 0 & & *\\ & \ddots & \\ 0 & & 0\\ \end{bmatrix} \]

where all entries on and below the diagonal are \(0\).


Decomposition of an Operator

Theorem 8.20: The Null Space and Range of \(p(T)\) are Invariant Under \(T\)

Suppose \(T \in L(V)\) and \(p \in P(\mathbb{F})\). Then \(\text{null } p(T)\) and \(\text{range } p(T)\) are invariant under \(T\).

Proof of Theorem 8.20:
  1. \(\text{null } p(T)\) is invariant under \(T\):

    Suppose \(v \in \text{null } p(T)\); we want to show that \(p(T)(T(v)) = 0\). Since \(p(T)\) and \(T\) commute, \(p(T)(T(v)) = T(p(T)(v)) = T(0) = 0\), so \(T(v) \in \text{null } p(T)\).

  2. \(\text{range } p(T)\) is invariant under \(T\):

    Suppose \(v \in \text{range } p(T)\), say \(v = p(T)(u)\) for some \(u \in V\); we want to show that \(T(v) \in \text{range } p(T)\). Indeed, \(T(v) = T(p(T)(u)) = p(T)(T(u))\), so \(T(v) = p(T)(w)\) with \(w = T(u)\), which is the desired result.


Theorem 8.21: Description of Operators on Complex Vector Space

Suppose \(V\) is a complex vector space and \(T \in L(V)\). Let \(\lambda_1, ..., \lambda_m\) be the distinct eigenvalues of \(T\). Then:

  1. \(V = G(\lambda_1, T) \oplus ... \oplus G(\lambda_m, T)\).
  2. Each \(G(\lambda_j, T)\) is invariant under \(T\).
  3. Each \((T - \lambda_j I )|_{G(\lambda_j, T)}\) is nilpotent.


Theorem 8.23: A Basis of Generalized Eigenvectors

Suppose \(V\) is a complex vector space and \(T \in L(V)\). Then there is a basis of \(V\) consisting of generalized eigenvectors of \(T\).


Definition 8.24: Multiplicity

Suppose \(T \in L(V)\). The multiplicity of an eigenvalue \(\lambda\) of \(T\) is defined to be the dimension of the corresponding generalized eigenspace \(G(\lambda, T)\).

In other words, the multiplicity of an eigenvalue \(\lambda\) of \(T\) equals \(\dim \text{null }(T - \lambda I)^{\dim V}\).

The algebraic multiplicity of \(\lambda\) is defined above.

The geometric multiplicity of \(\lambda\) is defined to be the dimension of corresponding eigenspace:

\[\dim E(\lambda, T)\]
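
A small sketch (assuming NumPy; the Jordan-block matrix is illustrative) contrasting the two notions: for a \(2 \times 2\) Jordan block, the geometric multiplicity of its eigenvalue is \(1\) while the algebraic multiplicity is \(2\).

```python
# Sketch (assuming NumPy): geometric vs algebraic multiplicity of 5 for a
# 2x2 Jordan block; dim E = n - rank(T - 5I), dim G = n - rank (T - 5I)^n.
import numpy as np

T = np.array([[5., 1.],
              [0., 5.]])
n = T.shape[0]
A = T - 5.0 * np.eye(n)

geometric = n - np.linalg.matrix_rank(A)                             # dim E(5, T)
algebraic = n - np.linalg.matrix_rank(np.linalg.matrix_power(A, n))  # dim G(5, T)
print(geometric, algebraic)  # 1 2
```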


Theorem 8.26: Sum of the Multiplicities Equals \(\dim V\)

Suppose \(V\) is a complex vector space and \(T \in L(V)\). Then the sum of the multiplicities of all eigenvalues of \(T\) equals \(\dim V\).


Definition 8.27: Block Diagonal Matrix

A block diagonal matrix is a square matrix of the form:

\[ \begin{bmatrix} A_1 & & 0\\ & \ddots & \\ 0 & & A_m\\ \end{bmatrix} \]

where \(A_1, ..., A_m\) are square matrices lying along the diagonal and all the other entries of the matrix equal \(0\). If \(A_1, ..., A_m\) are all \(1 \times 1\) matrices, we have a diagonal matrix.


Theorem 8.29: Block Diagonal Matrix with Upper-Triangular Blocks

Suppose \(V\) is a complex vector space and \(T \in L(V)\). Let \(\lambda_1, ..., \lambda_m\) be the distinct eigenvalues of \(T\), with multiplicities \(d_1, ..., d_m\). Then there is a basis of \(V\) w.r.t which \(T\) has a block diagonal matrix of the form:

\[ \begin{bmatrix} A_1 & & 0\\ & \ddots & \\ 0 & & A_m\\ \end{bmatrix} \]

where each \(A_j\) is a \(d_j \times d_j\) upper-triangular matrix of the form:

\[ A_j = \begin{bmatrix} \lambda_j & & *\\ & \ddots & \\ 0 & & \lambda_j\\ \end{bmatrix} \]


Theorem 8.31: Identity Plus Nilpotent Has a Square Root

Suppose \(N \in L(V)\) is nilpotent. Then \(I + N\) has a square root.


Theorem 8.33: Over \(\mathbb{C}\), Invertible Operators Have Square Roots

Suppose \(V\) is a complex vector space and \(T \in L(V)\) is invertible. Then \(T\) has a square root.


Characteristic and Minimal Polynomials

Definition 8.34: Characteristic Polynomial (Complex Vector Space)

Suppose \(V\) is a complex vector space, and \(T \in L(V)\). Let \(\lambda_1, ..., \lambda_m\) denote the distinct eigenvalues of \(T\), with multiplicities \(d_1, ..., d_m\). The polynomial \(q\) defined as:

\[q(z) = (z - \lambda_1)^{d_1} ... (z - \lambda_m)^{d_m}\]

is called the characteristic polynomial of \(T\).


Theorem 8.36: Degree and Zeros of Characteristic Polynomial

Suppose \(V\) is a complex vector space and \(T \in L(V)\). Then:

  1. The characteristic polynomial of \(T\) has degree \(\dim V\).
  2. The zeros of the characteristic polynomial of \(T\) are the eigenvalues of \(T\).


Theorem 8.37: Cayley-Hamilton Theorem (This also works for real vector space, see theorem 9.24)

Suppose \(V\) is a complex vector space and \(T \in L(V)\). Let \(q\) denote the characteristic polynomial of \(T\). Then \(q(T) = 0\).

Proof of Theorem 8.37:

Let \(\lambda_1, ..., \lambda_m\) be the distinct eigenvalues of \(T\) and \(d_1, ..., d_m\) be the dimensions of the corresponding generalized eigenspaces \(G(\lambda_1, T), ..., G(\lambda_m, T)\). By theorem 8.21, each \((T - \lambda_j I)|_{G(\lambda_j, T)}\) is nilpotent, so by theorem 8.18 we have:

\[(T - \lambda_j I)^{d_j}|_{G(\lambda_j, T)} = 0\]

Since \(V\) is the direct sum of the generalized eigenspaces, we can write every \(v \in V\) as \(v = g_1 + ... + g_m\), where \(g_j \in G(\lambda_j, T)\), and:

\[q(T) (v) = q(T) (g_1) + ... + q(T)(g_m)\]

Thus, to show that \(q(T) = 0\), we only need to show that

\[q(T)|_{G(\lambda_j, T)} = 0, \; \forall j=1, ..., m\]

We have:

\[q(T) = (T - \lambda_1 I)^{d_1} ... (T - \lambda_m I)^{d_m}\]

The factors of \(q(T)\) commute (theorem 5.20), so for every \(g_j \in G(\lambda_j, T)\) we can apply \((T - \lambda_j I)^{d_j}\) first; for example, for \(g_1 \in G(\lambda_1, T)\):

\[q(T) (g_1) = (T - \lambda_m I)^{d_m} ... ((T - \lambda_1 I)^{d_1} (g_1)) = 0\]
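
A numerical check of the theorem (assuming NumPy; the matrix is arbitrary): evaluate the characteristic polynomial on \(T\) by Horner's rule and verify that the result is the zero operator.

```python
# Sketch (assuming NumPy): Cayley-Hamilton, q(T) = 0.
import numpy as np

T = np.array([[1., 2., 0.],
              [0., 3., 1.],
              [4., 0., 1.]])
coeffs = np.poly(T)  # monic characteristic polynomial, highest degree first

Q = np.zeros_like(T)
for c in coeffs:     # Horner's rule on matrices: Q <- Q T + c I
    Q = Q @ T + c * np.eye(T.shape[0])
print(np.allclose(Q, 0))  # True
```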


Definition 8.38: Monic Polynomial

A monic polynomial is a polynomial whose highest-degree coefficient equals \(1\).

For example, \(z\) and \(2 + 6z^2 + z^7\) are monic polynomials.


Theorem 8.40: Minimal Polynomial

Suppose \(T \in L(V)\). Then there is a unique monic polynomial \(p\) of the smallest degree such that \(p(T) = 0\).


Definition 8.43: Minimal Polynomial

Suppose \(T \in L(V)\). Then the minimal polynomial of \(T\) is the unique monic polynomial \(p\) of the smallest degree such that \(p(T) = 0\)

From theorems 8.37 and 8.36, we know that if \(V\) is a complex vector space, then \(p\) has degree at most \(\dim V\).
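
A sketch of how one might compute the minimal polynomial numerically (assuming NumPy; `minimal_polynomial` is an illustrative helper, not a library routine): find the smallest \(k\) such that \(T^k\) is a linear combination of \(I, T, ..., T^{k-1}\).

```python
# Sketch (assuming NumPy): minimal polynomial via linear dependence of the
# powers I, T, T^2, ...; the degree is at most dim V (theorem 9.26).
import numpy as np

def minimal_polynomial(T, tol=1e-8):
    """Monic coefficients (highest degree first) of the minimal polynomial."""
    n = T.shape[0]
    powers = [np.eye(n).ravel()]                   # vectorized I, T, T^2, ...
    for k in range(1, n + 1):
        B = np.column_stack(powers)                # columns: I, T, ..., T^(k-1)
        target = np.linalg.matrix_power(T, k).ravel()
        c, *_ = np.linalg.lstsq(B, target, rcond=None)
        if np.linalg.norm(B @ c - target) < tol:   # T^k in span of lower powers
            return np.concatenate(([1.0], -c[::-1]))
        powers.append(target)
    raise AssertionError("unreachable: the degree is at most dim V")

print(minimal_polynomial(np.eye(2)))  # [ 1. -1.], i.e. p(z) = z - 1, degree < dim V
```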


Theorem 8.46: \(q(T) = 0 \implies q\) is a Multiple of the Minimal Polynomial

Suppose \(T \in L(V)\) and \(q \in P(\mathbb{F})\). Then \(q(T) = 0\) IFF \(q\) is a polynomial multiple of the minimal polynomial of \(T\).


Theorem 8.48: Characteristic Polynomial is a Multiple of Minimal Polynomial (Complex Vector Space)

Suppose \(\mathbb{F} = \mathbb{C}\) and \(T \in L(V)\). Then the characteristic polynomial of \(T\) is a polynomial multiple of the minimal polynomial of \(T\).


Theorem 8.49: Eigenvalues are the Zeros of the Minimal Polynomial

Let \(T \in L(V)\). Then the zeros of the minimal polynomial of \(T\) are precisely the eigenvalues of \(T\).


Jordan Form

Theorem 8.55: Basis Corresponding to a Nilpotent Operator

Suppose \(N \in L(V)\) is nilpotent. Then there exist vectors \(v_1, ..., v_n \in V\) and non-negative integers \(m_1, ..., m_n\) s.t:

  1. \(N^{m_1}(v_1), ..., N(v_1), v_1, ..., N^{m_n}(v_n), ..., N(v_n), v_n\) is a basis of \(V\).
  2. \(N^{m_1 + 1} (v_1) = ... = N^{m_n + 1}(v_n) = 0\)


Definition 8.59: Jordan Basis

Suppose \(T \in L(V)\). A basis of \(V\) is called a Jordan basis for \(T\) if with respect to this basis \(T\) has a block diagonal matrix:

\[ \begin{bmatrix} A_1 & & 0\\ & \ddots & \\ 0 & & A_m\\ \end{bmatrix} \]

where each \(A_j\) is an upper-triangular matrix of the form:

\[ A_j = \begin{bmatrix} \lambda_j & 1 & & 0\\ & \ddots & \ddots & \\ & & \ddots & 1\\ 0 & & & \lambda_j\\ \end{bmatrix} \]

Theorem 8.60: Jordan Form

Suppose \(V\) is a complex vector space. If \(T \in L(V)\), then there is a basis of \(V\) that is a Jordan basis for \(T\).
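
A sketch using SymPy (an assumption; the example matrix is illustrative): `Matrix.jordan_form` returns \(P\) and \(J\) with \(M = P J P^{-1}\), and the columns of \(P\) form a Jordan basis.

```python
# Sketch (assuming SymPy): compute a Jordan basis and the Jordan form.
from sympy import Matrix

M = Matrix([[5, 4, 2, 1],
            [0, 1, -1, -1],
            [-1, -1, 3, 0],
            [1, 1, -1, 2]])
P, J = M.jordan_form()  # M = P * J * P**-1; columns of P are a Jordan basis
print(J)                # 1x1 blocks for eigenvalues 1 and 2, a 2x2 block for 4
```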


Operators on Real Vector Spaces

Complexification

Definition 9.2: Complexification of \(V, V_C\)

Suppose \(V\) is a real vector space.

  • The complexification of \(V\), denoted \(V_C\), equals \(V \times V\). An element of \(V_C\) is an ordered pair \((u, v)\), where \(u, v \in V\), but we will write this as \(u + iv\).
  • Addition on \(V_C\) is defined by: \[(u_1 + iv_1) + (u_2 + iv_2) = (u_1 + u_2) + i(v_1 + v_2)\] for \(u_1, v_1, u_2, v_2 \in V\)
  • Complex scalar multiplication on \(V_C\) is defined by: \[(a + bi)(u + iv) = (au - bv) + i(av + bu)\] for \(a, b \in \mathbb{R}\) and \(u, v \in V\)

We can think of \(V\) as a subset of \(V_C\) by identifying \(u \in V\) with \(u + i0\)
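
For instance, taking \(a = 0, b = 1\) in the scalar-multiplication rule gives \(i(u + iv) = -v + iu\), exactly what the notation \(u + iv\) suggests.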


Theorem 9.3: \(V_C\) is a complex vector space

Suppose \(V\) is a real vector space. Then with the definition of addition and scalar multiplication above, \(V_C\) is a complex vector space.


Theorem 9.4: Basis of \(V\) is a basis of \(V_C\)

Suppose \(V\) is a real vector space.

  1. If \(v_1, ..., v_n\) is a basis of \(V\), then \(v_1, ..., v_n\) is a basis of \(V_C\).
  2. The dimension of \(V_C\) as a complex vector space equals the dimension of \(V\) as a real vector space. (As a real vector space, \(\dim(V \times V) = \dim V + \dim V\); the dimension halves because the scalars are extended from \(\mathbb{F} = \mathbb{R}\) to \(\mathbb{F} = \mathbb{C}\).)


Definition 9.5: Complexification of \(T, T_C\)

Suppose \(V\) is a real vector space and \(T \in L(V)\). The complexification of \(T\), denoted \(T_C\), is the operator \(T_C \in L(V_C)\) defined by:

\[T_C(u + iv) = T(u) + iT(v)\]

for \(u, v \in V\)

Suppose \(A\) is an \(n \times n\) matrix of real numbers. Define \(T \in L(\mathbb{R}^n)\) by \(T(x) = Ax\), where the elements of \(\mathbb{R}^n\) are thought of as \(n \times 1\) column vectors. Then \(T_C(z) = Az\) for \(z \in \mathbb{C}^n\). Thus, we can think of \(T_C\) as multiplication by the same matrix \(A\), now acting on the larger domain \(\mathbb{C}^n\).
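
A small numerical sketch (assuming NumPy): rotation by \(90°\) on \(\mathbb{R}^2\) has no real eigenvalues, but the same matrix acting on \(\mathbb{C}^2\) (i.e. \(T_C\)) has the conjugate pair \(\pm i\), anticipating theorems 9.11 and 9.16.

```python
# Sketch (assuming NumPy): T_C of the 90-degree rotation has eigenvalues +/- i.
import numpy as np

A = np.array([[0., -1.],
              [1., 0.]])
print(np.linalg.eigvals(A))  # [0.+1.j  0.-1.j], a conjugate pair, none real
```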


Theorem 9.7: Matrix of \(T_C\) equals Matrix of \(T\)

Suppose \(V\) is a real vector space with basis \(v_1, ..., v_n\) and \(T \in L(V)\). Then \(M(T) = M(T_C)\), where both matrices are w.r.t the basis \(v_1, ..., v_n\).


Theorem 9.8: Every Operator Has an Invariant Subspace of Dimension 1 or 2

Every operator on a nonzero finite-dimensional vector space has an invariant subspace of dimension 1 or 2.


Theorem 9.10: Minimal Polynomial of \(T_C\) Equals Minimal Polynomial of \(T\)

Suppose \(V\) is a real vector space and \(T \in L(V)\). Then the minimal polynomial of \(T_C\) equals the minimal polynomial of \(T\).


Theorem 9.11: Real Eigenvalues of \(T_C\)

Suppose \(V\) is a real vector space, \(T \in L(V)\), and \(\lambda \in \mathbb{R}\). Then \(\lambda\) is an eigenvalue of \(T_C\) if and only if \(\lambda\) is an eigenvalue of \(T\).

Proof of Theorem 9.11:

The eigenvalues of \(T\) are the real zeros of the minimal polynomial of \(T\), and the real eigenvalues of \(T_C\) are the real zeros of the minimal polynomial of \(T_C\). Since these minimal polynomials are equal by theorem 9.10, the real eigenvalues of \(T\) and \(T_C\) are the same.


Theorem 9.12: \(T_C - \lambda I\) and \(T_C - \bar{\lambda} I\)

Suppose \(V\) is a real vector space, \(T \in L(V)\), \(\lambda \in \mathbb{C}\), \(j\) is a non-negative integer, and \(u, v \in V\). Then:

\[(T_C - \lambda I)^j (u + iv) = 0 \quad \text{ IFF } \quad (T_C - \bar{\lambda} I)^j (u - iv) = 0\]


Theorem 9.16: Non-real Eigenvalues of \(T_C\) Come in Pairs

Suppose \(V\) is a real vector space, \(T \in L(V)\), and \(\lambda \in \mathbb{C}\). Then \(\lambda\) is an eigenvalue of \(T_C\) IFF \(\bar{\lambda}\) is an eigenvalue of \(T_C\).


Theorem 9.17: Multiplicity of \(\lambda\) Equals Multiplicity of \(\bar{\lambda}\)

Suppose \(V\) is a real vector space, \(T \in L(V)\), \(\lambda \in \mathbb{C}\) is an eigenvalue of \(T_C\). Then the multiplicity of \(\lambda\) as an eigenvalue of \(T_C\) equals the multiplicity of \(\bar{\lambda}\) as an eigenvalue of \(T_C\).


Theorem 9.19: Operator on Odd-dimensional Vector Space Has Eigenvalue

Every operator on an odd-dimensional real vector space has an eigenvalue.

Proof of Theorem 9.19:

The non-real eigenvalues of \(T_C\) come in conjugate pairs with equal multiplicities (theorems 9.16, 9.17), and the multiplicities of all eigenvalues add up to \(\dim V_C = \dim V\) (theorem 8.26), which is odd. Hence \(T_C\) has at least one real eigenvalue, which by theorem 9.11 is also an eigenvalue of \(T\).


Theorem 9.20: Characteristic Polynomial of \(T_C\)

Suppose \(V\) is a real vector space and \(T \in L(V)\). Then the coefficients of the characteristic polynomial of \(T_C\) are all real.


Definition 9.21: Characteristic Polynomial

Suppose \(V\) is a real vector space and \(T \in L(V)\). Then the characteristic polynomial of \(T\) is defined to be the characteristic polynomial of \(T_C\).


Theorem 9.23: Degree and Zeros of Characteristic Polynomials

Suppose \(V\) is a real vector space and \(T \in L(V)\). Then:

  1. The coefficients of the characteristic polynomial of \(T\) are all real.
  2. The characteristic polynomial of \(T\) has degree \(\dim V\).
  3. The eigenvalues of \(T\) are precisely the real zeros of the characteristic polynomial of \(T\).


Theorem 9.24: Cayley-Hamilton Theorem

Suppose \(T \in L(V)\). Let \(q\) denote the characteristic polynomial of \(T\). Then \(q(T) = 0\).


Theorem 9.26: Characteristic Polynomial is a Multiple of Minimal Polynomial

Suppose \(T \in L(V)\). Then:

  1. The degree of the minimal polynomial of \(T\) is at most \(\dim V\).
  2. The characteristic polynomial of \(T\) is a polynomial multiple of the minimal polynomial of \(T\).


Operators on Real Inner Product Spaces

Theorem 9.27: Normal But Not Self-Adjoint Operators

Suppose \(V\) is a 2-dimensional real inner product space and \(T \in L(V)\).

Then the following are equivalent:

  1. \(T\) is normal but not self-adjoint.
  2. The matrix of \(T\) w.r.t every orthonormal basis of \(V\) has the form: \[ \begin{bmatrix} a & -b\\ b & a\\ \end{bmatrix} \] with \(b \neq 0\)
  3. The matrix of \(T\) w.r.t some orthonormal basis of \(V\) has the form: \[ \begin{bmatrix} a & -b\\ b & a\\ \end{bmatrix} \] with \(b > 0\)


Theorem 9.30: Normal Operators and Invariant Subspaces

Suppose \(V\) is an inner-product space, \(T \in L(V)\) is normal, and \(U\) is a subspace of \(V\) that is invariant under \(T\). Then:

  1. \(U^{\perp}\) is invariant under \(T\).
  2. \(U\) is invariant under \(T^*\).
  3. \((T|_U)^* = (T^*)|_U\)
  4. \(T|_U \in L(U)\) and \(T|_{U^{\perp}} \in L(U^{\perp})\) are normal operators.


Theorem 9.34: Characterization of Normal Operators When \(\mathbb{F} = \mathbb{R}\)

Suppose \(V\) is a real inner product space and \(T \in L(V)\). Then the following are equivalent:

  1. \(T\) is normal.
  2. There is an orthonormal basis of \(V\) w.r.t which \(T\) has a block diagonal matrix s.t each block is a \(1 \times 1\) matrix or a \(2 \times 2\) matrix of the form: \[ \begin{bmatrix} a & -b\\ b & a\\ \end{bmatrix} \] with \(b > 0\)


Theorem 9.36: Description of Isometries When \(\mathbb{F} = \mathbb{R}\)

Suppose \(V\) is a real inner product space and \(S \in L(V)\). Then the following are equivalent:

  1. \(S\) is an isometry.
  2. There is an orthonormal basis of \(V\) w.r.t which \(S\) has a block diagonal matrix such that each block on the diagonal is a \(1 \times 1\) matrix containing \(1\) or \(-1\) or is a \(2 \times 2\) matrix of the form: \[ \begin{bmatrix} \cos \theta & -\sin \theta\\ \sin \theta & \cos \theta\\ \end{bmatrix} \] with \(\theta \in (0, \pi)\)
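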


Trace and Determinant

Trace

Definition 10.2: Identity Matrix, \(I\)

Suppose \(n\) is a positive integer. The \(n \times n\) diagonal matrix:

\[ \begin{bmatrix} 1 & & 0\\ & \ddots & \\ 0 & & 1\\ \end{bmatrix} \]

is called the identity matrix and is denoted \(I\). With respect to every basis of \(V\), the matrix of identity operator \(I \in L(V)\) is the identity matrix.


Definition 10.3: Invertible, Inverse, \(A^{-1}\) (Non-singular)

A square matrix \(A\) is called invertible if there is a matrix \(B\) of the same size such that \(AB = BA = I\). Such a \(B\) is unique; we call it the inverse of \(A\) and denote it by \(A^{-1}\).


Theorem 10.4: The Matrix of the Product of Linear Maps

Suppose \(u_1, ..., u_n\) and \(v_1, ..., v_n\) and \(w_1, ..., w_n\) are all bases of \(V\). Suppose \(S, T \in L(V)\). Then:

\[M(ST, (u_1, ..., u_n), (w_1, ..., w_n)) = M(S, (v_1, ..., v_n), (w_1, ..., w_n))M(T, (u_1, ..., u_n), (v_1, ..., v_n))\]


Theorem 10.5: Matrix of the Identity w.r.t Two Bases

Suppose \(u_1, ..., u_n\) and \(v_1, ..., v_n\) are bases of \(V\). Then the matrices \(M(I, (u_1, ..., u_n), (v_1, ..., v_n))\) and \(M(I, (v_1, ..., v_n), (u_1, ..., u_n))\) are invertible, and each is the inverse of the other.

Proof of Theorem 10.5:

Let \(w_j = u_j\) and \(S = T = I\) in theorem 10.4; then we have:

\[M(I, (u_j), (u_j)) = M(I, (v_j), (u_j))M(I, (u_j), (v_j)) = I\]

Interchanging the roles of \(u_j\) and \(v_j\), we have:

\[M(I, (v_j), (v_j)) = M(I, (u_j), (v_j))M(I, (v_j), (u_j)) = I\]


Theorem 10.7: Change of Basis Formula

Suppose \(T \in L(V)\). Let \(u_1, ..., u_n\) and \(v_1, ..., v_n\) be bases of \(V\). Let \(A = M(I, (u_1, ..., u_n), (v_1, ..., v_n))\). Then:

\[M(T, (u_1, ..., u_n)) = A^{-1} M(T, (v_1, ..., v_n)) A\]

Proof of Theorem 10.7

In theorem 10.4, take \(S = T\), \(T = I\), and \(w_j = v_j\); then:

\[M(T, (u_j), (v_j)) = M(T, (v_j), (v_j)) M(I, (u_j), (v_j)) = M(T, (v_j)) A\]

In theorem 10.4, take \(S = I\) and \(w_j = u_j\); then, using theorem 10.5:

\[M(T, (u_j)) = M(I, (v_j), (u_j)) M(T, (u_j), (v_j)) = A^{-1} M(T, (u_j), (v_j)) = A^{-1} M(T, (v_j)) A\]


Definition 10.9: Trace of an Operator

Suppose \(T \in L(V)\):

  • If \(\mathbb{F} = \mathbb{C}\), then the trace of \(T\) is the sum of eigenvalues of \(T\), with each eigenvalue repeated according to its multiplicity.
  • If \(\mathbb{F} = \mathbb{R}\), then the trace of \(T\) is the sum of the eigenvalues of \(T_C\), with each eigenvalue repeated according to its multiplicity.

\[tr(T) = d_1 \lambda_1 + ... + d_m \lambda_m\]

where \(\lambda_i\) are the eigenvalues of \(T\) (if \(\mathbb{F} = \mathbb{C}\)) or of \(T_C\) (if \(\mathbb{F} = \mathbb{R}\)), and \(d_i\) are their multiplicities.
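
A numerical check (assuming NumPy; the matrix is arbitrary) of this definition together with theorem 10.16 below: the sum of the eigenvalues matches the sum of the diagonal entries.

```python
# Sketch (assuming NumPy): sum of eigenvalues equals the matrix trace.
import numpy as np

T = np.array([[1., 2.],
              [3., 4.]])
eig_sum = np.sum(np.linalg.eigvals(T))   # eigenvalues (5 +/- sqrt(33))/2 sum to 5
print(np.isclose(eig_sum, np.trace(T)))  # True
```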


Theorem 10.12: Trace and Characteristic Polynomial

Suppose \(T \in L(V)\). Let \(n = \dim V\). Then \(tr(T)\) equals the negative of the coefficient of \(z^{n-1}\) in the characteristic polynomial of \(T\).


Definition 10.13: Trace of a Matrix

The trace of a square matrix \(A\), denoted trace \(A\), is defined to be the sum of the diagonal entries of \(A\).


Theorem 10.14: \(tr(AB) = tr(BA)\)

If \(A, B\) are square matrices of the same size, then:

\[tr(AB) = tr(BA)\]


Theorem 10.15: Trace of Matrix of Operator Does Not Depend on Basis

Let \(T \in L(V)\). Suppose \(v_1, ..., v_n\) and \(u_1, ..., u_n\) are bases of \(V\). Then:

\[tr(M(T, (u_1, ..., u_n))) = tr(M(T, (v_1, ..., v_n)))\]


Theorem 10.16: Trace of an Operator Equals Trace of Its Matrix

Suppose \(T \in L(V)\). Then \(tr(T) = tr(M(T))\).

This implies that the sum of the eigenvalues of \(T\) (or \(T_C\)) equals the sum of the diagonal entries of the matrix of \(T\), regardless of the basis.


Theorem 10.18: Trace is Additive

Suppose \(S, T \in L(V)\). Then \(tr(S + T) = tr(S) + tr(T)\).


Theorem 10.19: The Identity is not the Difference of \(ST\) and \(TS\)

There do not exist operators \(S, T \in L(V)\) s.t \(ST - TS = I\)


Determinant

Definition 10.20: Determinant of an Operator, \(\det T\)

Suppose \(T \in L(V)\):

  • If \(\mathbb{F} = \mathbb{C}\), then the determinant of \(T\) is the product of the eigenvalues of \(T\) with each eigenvalue repeated according to its multiplicity.
  • If \(\mathbb{F} = \mathbb{R}\), then the determinant of \(T\) is the product of the eigenvalues of \(T_C\) with each eigenvalue repeated according to its multiplicity.

The determinant of \(T\) is denoted by \(\det T\).


Theorem 10.22: Determinant and Characteristic Polynomial

Suppose \(T \in L(V)\). Let \(n = \dim V\). Then \(\det T\) equals \((-1)^n\) times the constant term of the characteristic polynomial of \(T\).


Theorem 10.23: Characteristic Polynomial, Trace and Determinant

Suppose \(T \in L(V)\). Then the characteristic polynomial of \(T\) can be written as:

\[z^n - (tr(T)) z^{n-1} + ... + (-1)^n \det T\]
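
A numerical check (assuming NumPy; the matrix is arbitrary) of theorems 10.12, 10.22 and 10.23: in the monic characteristic polynomial, the \(z^{n-1}\) coefficient is \(-tr(T)\) and the constant term is \((-1)^n \det T\).

```python
# Sketch (assuming NumPy): coefficients of the characteristic polynomial
# against trace and determinant.
import numpy as np

T = np.array([[1., 2., 0.],
              [0., 3., 1.],
              [4., 0., 1.]])
n = T.shape[0]
coeffs = np.poly(T)  # monic characteristic polynomial, highest degree first
print(np.isclose(coeffs[1], -np.trace(T)),
      np.isclose(coeffs[-1], (-1) ** n * np.linalg.det(T)))  # True True
```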


Theorem 10.24: Invertible is Equivalent to Nonzero Determinant

An operator on \(V\) is invertible IFF its determinant is nonzero.

Proof of Theorem 10.24:

Suppose \(V\) is a complex vector space and \(T \in L(V)\). By theorem 5.30, \(T\) is invertible IFF all its eigenvalues are nonzero, which happens IFF the product of the eigenvalues of \(T\), i.e. \(\det T\), is nonzero.

Suppose \(V\) is a real vector space. Then \(T\) is invertible IFF \(0\) is not an eigenvalue of \(T\), which by theorem 9.11 happens IFF \(0\) is not an eigenvalue of \(T_C\), i.e. IFF \(\det T\), the product of the eigenvalues of \(T_C\), is nonzero.


Theorem 10.25: Characteristic Polynomial of \(T\) Equals \(\det (zI - T)\)

Suppose \(T \in L(V)\). Then the characteristic polynomial of \(T\) equals \(\det (zI - T)\).

Proof of Theorem 10.25:

Suppose \(V\) is a complex vector space. If \(\lambda, z \in \mathbb{C}\), then \(\lambda\) is an eigenvalue of \(T\) IFF \(z - \lambda\) is an eigenvalue of \(zI - T\), because:

\[-(T - \lambda I) = (\underbrace{(zI - T)}_{\text{operator}} - \underbrace{(z - \lambda)}_{\text{eigenvalue}} I)\]

Then we have:

\[\dim (\text{null }(-(T - \lambda I))^{\dim V}) = \dim (\text{null }((zI - T) - (z - \lambda))^{\dim V})\]

Suppose \(T\) has eigenvalues \(\lambda_1, ..., \lambda_n\), repeated according to multiplicity. Then \(zI - T\) has eigenvalues \(z - \lambda_1, ..., z - \lambda_n\), repeated according to multiplicity. Thus, we have:

\[\det(zI - T) = (z - \lambda_1) ... (z - \lambda_n)\]

which is exactly the characteristic polynomial of \(T\).

The proof for a real vector space is the same.


Definition 10.27: Permutation, \(\text{perm } n\)

  • A permutation of \((1 ,..., n)\) is a list \((m_1, ..., m_n)\) that contains each of the numbers \(1, ..., n\) exactly once.
  • The set of all permutations of \((1, ..., n)\) is denoted \(\text{perm } n.\)

We should think of an element of \(\text{perm } n\) as a rearrangement of the list \((1, ..., n)\).


Definition 10.30: Sign of a Permutation

  • The sign of a permutation \((m_1, ..., m_n)\) is defined to be \(1\) if the number of pairs of integers \((j, k)\) with \(1 \leq j < k \leq n\) s.t \(j\) appears after \(k\) in the list \((m_1, ..., m_n)\) is even, and \(-1\) if the number of such pairs is odd.
  • In other words, the sign of a permutation equals \(1\) if the natural order has been changed an even number of times and equals \(-1\) if the natural order has been changed an odd number times.

The only pair of integers \((j, k)\) with \(j < k\) s.t \(j\) appears after \(k\) in the permutation \((2, 1, 3, 4)\) is \((1, 2)\); since there is one such pair, an odd number, the permutation has sign \(-1\).


Theorem 10.32: Interchanging Two Entries in a Permutation

Interchanging two entries in a permutation multiplies the sign of the permutation by \(-1\).


Definition 10.33: Determinant of a Matrix, \(\det A\)

Suppose \(A\) is an \(n \times n\) matrix:

\[ A = \begin{bmatrix} A_{1,1} & \cdots & A_{1,n}\\ \vdots & & \vdots\\ A_{n,1} & \cdots & A_{n,n}\\ \end{bmatrix} \]

The determinant of \(A\), denoted \(\det A\), is defined by:

\[\det A = \sum_{(m_1, ..., m_n) \in \text{perm } n} \big(\text{sign}(m_1, ..., m_n)\big) A_{m_1, 1} ... A_{m_n, n}\]
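
A direct (exponential-time) implementation of this definition (assuming NumPy and the standard library; `sign` and `det` are illustrative helpers), checked against `np.linalg.det`:

```python
# Sketch: determinant as a sum over permutations (definitions 10.30, 10.33).
import itertools
import numpy as np

def sign(perm):
    """1 if the number of out-of-order pairs is even, else -1 (def. 10.30)."""
    inversions = sum(1 for j in range(len(perm))
                     for k in range(j + 1, len(perm)) if perm[j] > perm[k])
    return -1 if inversions % 2 else 1

def det(A):
    n = A.shape[0]
    return sum(sign(p) * np.prod([A[p[j], j] for j in range(n)])
               for p in itertools.permutations(range(n)))

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
print(np.isclose(det(A), np.linalg.det(A)))  # True: both equal 18
```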


Theorem 10.36: Interchanging Two Columns in a Matrix

Suppose \(A\) is a square matrix and \(B\) is the matrix obtained from \(A\) by interchanging two columns. Then:

\[\det A = -\det B\]


Theorem 10.37: Matrix with Two Equal Columns

If \(A\) is a square matrix that has two equal columns, then \(\det A = 0\).


Theorem 10.38: Permuting the Columns of a Matrix

Suppose \(A = [A_{\cdot, 1} \; ... \; A_{\cdot, n}]\) is an \(n \times n\) matrix with columns \(A_{\cdot, 1}, ..., A_{\cdot, n}\), and \((m_1, ..., m_n)\) is a permutation. Then:

\[\det [A_{\cdot, m_1} \; ... \; A_{\cdot, m_n}] = \text{sign}(m_1, ..., m_n) \det A\]


Theorem 10.39: Determinant is a Linear Function of Each Column

Suppose \(k, n\) are positive integers with \(1 \leq k \leq n\). Fix \(n \times 1\) matrices \(A_{\cdot, 1}, ..., A_{\cdot, n}\) except \(A_{\cdot, k}\). Then the function that takes an \(n \times 1\) column vector \(A_{\cdot, k}\) to:

\[\det [A_{\cdot, 1} \; ... \; A_{\cdot, k} \; ... \; A_{\cdot, n}]\]

is a linear map from the vector space of \(n \times 1\) matrices with entries in \(\mathbb{F}\) to \(\mathbb{F}\).


Theorem 10.40: Determinant is Multiplicative

Suppose \(A, B\) are square matrices of the same size. Then:

\[\det(AB) = \det(BA) = \det(A)\det(B)\]


Theorem 10.41: Determinant of Matrix of Operator Does not Depend on Basis

Let \(T \in L(V)\). Suppose \(u_1, ..., u_n\) and \(v_1, ..., v_n\) are bases of \(V\). Then:

\[\det M(T, (u_1, ..., u_n)) = \det M(T, (v_1, ..., v_n))\]


Theorem 10.42: Determinant of an Operator Equals Determinant of Its Matrix

Suppose \(T \in L(V)\). Then \(\det T = \det M(T)\).


Theorem 10.43: Isometries Have Determinant with Absolute Value \(1\)

Suppose \(V\) is an inner product space and \(S \in L(V)\) is an isometry. Then \(|\det S| = 1\).
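
A numerical check of this last theorem (assuming NumPy; the orthogonal matrix comes from a QR factorization of a random matrix):

```python
# Sketch (assuming NumPy): an orthogonal Q (an isometry of R^4) has |det Q| = 1.
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
print(np.allclose(Q.T @ Q, np.eye(4)))         # True: Q preserves norms
print(np.isclose(abs(np.linalg.det(Q)), 1.0))  # True
```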