Linear Algebra (2)

Finite Dimensional Vector Space

Span and Linear Independence

Lists of vectors are written without surrounding parentheses. For example, \((4, 1, 6), (9, 5, 7)\) is a list of length 2 of vectors in \(\mathbb{R}^3\).

Definition 2.3: Linear Combination

A linear combination of a list \(v_1, ..., v_m\) of vectors in \(V\) is a vector of the form:

\[a_1v_1 + ... + a_m v_m\]

where \(a_1, ..., a_m \in \mathbb{F}\).

\((17, -4, 2)\) is a linear combination of the list of vectors \((2, 1, -3), (1, -2, 4)\), with \(a_1 = 6, a_2 = 5\).
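A quick numerical check of this example (a minimal numpy sketch; purely illustrative):

```python
import numpy as np

# Verify (17, -4, 2) = 6*(2, 1, -3) + 5*(1, -2, 4).
v1 = np.array([2, 1, -3])
v2 = np.array([1, -2, 4])
a1, a2 = 6, 5

print(a1 * v1 + a2 * v2)  # [17 -4  2]
assert np.array_equal(a1 * v1 + a2 * v2, np.array([17, -4, 2]))
```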

Definition 2.5: Span (Linear Span)

The set of all linear combinations of a list of vectors \(v_1, ..., v_m\) in \(V\) is called the span of \(v_1, ..., v_m\), denoted \(span(v_1, ..., v_m)\). In other words,

\[span(v_1, ..., v_m) = \{a_1v_1 + ... + a_m v_m : a_1, ..., a_m \in \mathbb{F}\}\]

The span of a list of vectors in \(V\) is the smallest subspace of \(V\) containing all the vectors in the list.
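Computationally, deciding whether a vector \(w\) lies in \(span(v_1, ..., v_m)\) amounts to asking whether the linear system \(a_1 v_1 + ... + a_m v_m = w\) has a solution. A minimal numpy sketch (the helper name `in_span` and the tolerance are my own choices):

```python
import numpy as np

def in_span(w, vectors, tol=1e-10):
    """Check whether w lies in span(vectors) by least squares:
    stack the v_i as columns of A and test whether A @ a = w is solvable."""
    A = np.column_stack(vectors)
    a, *_ = np.linalg.lstsq(A, w, rcond=None)
    return np.linalg.norm(A @ a - w) < tol

v1, v2 = np.array([2., 1., -3.]), np.array([1., -2., 4.])
print(in_span(np.array([17., -4., 2.]), [v1, v2]))  # True (the example above)
print(in_span(np.array([1., 0., 0.]), [v1, v2]))    # False
```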

Definition 2.8: Spans

If \(span(v_1, ..., v_m) = V\), we say that \(v_1, ..., v_m\) spans \(V\).

Definition 2.10: Finite-dimensional Vector Space

A vector space is called finite-dimensional if some list of vectors in it spans the space. (Recall that, by definition, every list has finite length.)

Definition 2.11: Polynomial, \(P(\mathbb{F})\)

A function \(p: \mathbb{F} \rightarrow \mathbb{F}\) is called a polynomial with coefficients in \(\mathbb{F}\) if there exist \(a_0, ..., a_m \in \mathbb{F}\) s.t:

\[p(z) = a_0 + a_1z + a_2z^2 + ... + a_m z^m\]

for all \(z \in \mathbb{F}\).

\(P(\mathbb{F})\) is the set of all polynomials with coefficients in \(\mathbb{F}\) (so it is a set of functions). With the usual operations of addition and scalar multiplication, \(P(\mathbb{F})\) is a vector space over \(\mathbb{F}\); in fact, it is a subspace of \(\mathbb{F}^{\mathbb{F}}\).

Definition 2.12: Degree of a Polynomial

A polynomial \(p \in P(\mathbb{F})\) is said to have degree \(m\) if there exist scalars \(a_0 ,..., a_m \in \mathbb{F}\) with \(a_m \neq 0\) s.t

\[p(z) = a_0 + a_1z + ... + a_m z^m\]

for all \(z \in \mathbb{F}\). If \(p\) has degree \(m\), we write \(\text{deg} \;p = m\).

The polynomial that is identically \(0\) is said to have degree \(-\infty\).

Definition 2.13: \(P_m(\mathbb{F})\)

For \(m\) a nonnegative integer, \(P_m (\mathbb{F})\) denotes the set of all polynomials with coefficients in \(\mathbb{F}\) and degree at most \(m\). Then \(P_m (\mathbb{F})\) is a finite-dimensional vector space for each nonnegative integer \(m\).

Definition 2.15: Infinite-Dimensional Vector Space

A vector space is called infinite-dimensional if it is not finite-dimensional.

Definition 2.17: Linearly Independent

A list \(v_1, ..., v_m\) of vectors in \(V\) is called linearly independent if the only choice of \(a_1, ..., a_m \in \mathbb{F}\) that makes \(a_1v_1 + ... + a_mv_m = 0\) is \(a_1 = ... = a_m = 0\).

The empty list \(()\) is also declared to be linearly independent.

Definition 2.19: Linearly Dependent

A list of vectors in \(V\) is called linearly dependent if it is not linearly independent.

In other words, a list \(v_1, ..., v_m\) of vectors in \(V\) is linearly dependent if there exist \(a_1, ..., a_m \in \mathbb{F}\), not all \(0\), such that:

\[a_1v_1 + ... + a_mv_m = 0\]
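For vectors in \(\mathbb{F}^n\), a practical test is that \(v_1, ..., v_m\) are linearly independent exactly when the matrix with the \(v_i\) as columns has rank \(m\). A minimal numpy sketch (names illustrative):

```python
import numpy as np

def linearly_independent(vectors):
    # Rank m <=> only the trivial combination gives 0.
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(linearly_independent([np.array([1., 0., 0.]),
                            np.array([0., 1., 0.])]))   # True
print(linearly_independent([np.array([1., 2., 3.]),
                            np.array([2., 4., 6.])]))   # False: second = 2 * first
```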

Lemma 2.21: Linear Dependence Lemma

Suppose \(v_1, ..., v_m\) is a linearly dependent list in \(V\). Then there exists \(j \in \{1, 2, ..., m\}\) such that the following hold:

  1. \(v_j \in span(v_1, ..., v_{j-1})\)
  2. If the \(j\)th term is removed from \(v_1, ..., v_m\), the span of the remaining list equals \(span(v_1, ..., v_m)\).

Corollary 2.23: Length of Linearly Independent List \(\leq\) Length of Spanning List

In a finite-dimensional vector space, the length of every linearly independent list of vectors is less than or equal to the length of every spanning list of vectors (i.e., every list of vectors that spans \(V\)).

Corollary 2.26: Finite-dimensional Subspaces

Every subspace of a finite-dimensional vector space is finite-dimensional.

Bases

Definition 2.27: Basis

A basis of \(V\) is a list of vectors in \(V\) that is linearly independent and spans \(V\).

Corollary 2.31: Spanning List Contains a Basis

Every spanning list in a vector space can be reduced to a basis of the vector space.

Corollary 2.32: Basis of Finite-Dimensional Vector Space

Every finite-dimensional vector space has a basis.

Corollary 2.33: Linearly Independent List Extends to a Basis

Every linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis of the vector space.

Corollary 2.34: Every Subspace of \(V\) is part of a Direct Sum Equal to \(V\)

Suppose \(V\) is finite-dimensional and \(U\) is a subspace of \(V\). Then there is a subspace \(W\) of \(V\) s.t \(V = U \oplus W\).

Dimension

Corollary 2.35: Basis Length Does Not Depend on Basis

Any two bases of a finite-dimensional vector space have the same length.

Definition 2.36: Dimension, dim \(V\)

The dimension of a finite-dimensional vector space is the length of any basis of the vector space.

The dimension of \(V\) (if \(V\) is finite-dimensional) is denoted by dim \(V\).

Corollary 2.38: Dimension of a Subspace

If \(V\) is finite-dimensional and \(U\) is a subspace of \(V\), then \(\dim U \leq \dim V\).

Corollary 2.39: Linearly Independent List of the Right Length is a Basis

Suppose \(V\) is finite-dimensional. Then every linearly independent list of vectors in \(V\) with length dim \(V\) is a basis of \(V\).

Corollary 2.42: Spanning List of the Right Length is a Basis

Suppose \(V\) is finite-dimensional. Then, every spanning list of vectors in \(V\) with length dim \(V\) is a basis of \(V\).

Corollary 2.43: Dimension of a Sum

If \(U_1\) and \(U_2\) are subspaces of a finite-dimensional vector space, then

\[\text{dim}(U_1 + U_2) = \text{dim } U_1 + \text{dim } U_2 - \dim(U_1 \cap U_2)\]
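A concrete check of this formula in \(\mathbb{R}^3\), with each subspace represented by a basis stored as matrix columns (a numpy sketch; the subspaces are chosen so the intersection is known by inspection):

```python
import numpy as np

# U1 = span(e1, e2), U2 = span(e2, e3), so U1 ∩ U2 = span(e2).
B1 = np.array([[1., 0.], [0., 1.], [0., 0.]])         # basis of U1 (columns)
B2 = np.array([[0., 0.], [1., 0.], [0., 1.]])         # basis of U2 (columns)

dim_U1 = np.linalg.matrix_rank(B1)                    # 2
dim_U2 = np.linalg.matrix_rank(B2)                    # 2
dim_sum = np.linalg.matrix_rank(np.hstack([B1, B2]))  # dim(U1 + U2) = 3
dim_intersection = 1                                  # span(e2), by inspection

assert dim_sum == dim_U1 + dim_U2 - dim_intersection
```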

Linear Maps

The Vector Space of Linear Maps

Assume \(V, U, W\) are vector spaces.

Definition 3.2: Linear Map (Linear Transformation)

A linear map from \(V\) to \(W\) is a function \(T: V \rightarrow W\) with the following properties:

  1. Additivity: \[T(u + v) = T(u) + T(v), \; \forall u, v \in V\]
  2. Homogeneity: \[T(\lambda v) = \lambda (Tv), \; \forall \lambda \in \mathbb{F}, \; \forall v \in V\]

Notation: we often write \(Tv\) instead of \(T(v)\).

Notation 3.3: \(L(V, W)\)

The set of all linear maps from \(V\) to \(W\) is denoted \(L(V, W)\).

Corollary 3.5: Linear Maps and Basis of Domain

Suppose \(v_1, ..., v_n\) is a basis of \(V\) and \(w_1, ..., w_n \in W\) (any vectors, not necessarily a basis). Then there exists a unique linear map \(T: V \rightarrow W\) s.t

\[Tv_j = w_j\]

for each \(j=1, ..., n\)

Definition 3.6: Addition and Scalar Multiplication on \(L(V, W)\)

Suppose \(S,T \in L(V, W)\) and \(\lambda \in \mathbb{F}\). The sum \(S+T\) and the product \(\lambda T\) are the linear maps from \(V\) to \(W\) defined by:

\[(S + T) (v) = Sv + Tv\]

\[(\lambda T)(v) = \lambda (Tv)\]

Corollary 3.7: \(L(V, W)\) is a Vector Space

With the operations of addition and scalar multiplication as defined in Definition 3.6, \(L(V, W)\) is a vector space.

Definition 3.8: Product of Linear Maps

If \(T \in L(U, V)\) and \(S \in L(V, W)\), then the product \(ST \in L(U, W)\) is defined by

\[(ST)(u) = S(T(u))\]

for \(u \in U\). In other words, \(ST\) is the composition:

\[ST = S \circ T\]

Corollary 3.9: Algebraic Properties of Products of Linear Maps

Assume all products make sense.

  1. Associativity: \[(T_1T_2)T_3 = T_1(T_2T_3)\]

  2. Identity: \[TI = IT = T\] Whenever \(T \in L(V, W)\), the first \(I\) is the identity map on \(V\), the second \(I\) is the identity map on \(W\).

  3. Distributive: \[(S_1 + S_2) T = S_1T + S_2T\] \[S(T_1 + T_2) = ST_1 + ST_2\]

    Whenever \(T, T_1, T_2 \in L(U, V)\), \(S, S_1, S_2 \in L(V, W)\)

Corollary 3.11: Linear Maps Take \(0\) to \(0\)

Suppose \(T\) is a linear map from \(V\) to \(W\). Then \(T(0) = 0\). In other words, a linear map sends the additive identity of \(V\) to the additive identity of \(W\).


Null Spaces and Ranges

Definition 3.12: Null Space, null \(T\)

For \(T \in L(V, W)\), the null space of \(T\), denoted null \(T\), is the subset of \(V\) consisting of those vectors that \(T\) maps to \(0\):

\[\text{null } T = \{v \in V: T(v) = 0\}\]

Corollary 3.14: The Null Space is a Subspace

Suppose \(T \in L(V, W)\). Then null \(T\) is a subspace of \(V\).

Definition 3.15: Injective (One to One)

A function \(T: V \rightarrow W\) is called injective if \(T(u) = T(v) \implies u = v\).

Corollary 3.16: Injectivity is Equivalent to Null Space Equals \(\{0\}\)

Let \(T \in L(V, W)\). Then \(T\) is injective if and only if null \(T = \{0\}\).

Definition 3.17: Range (Image)

For \(T: V \rightarrow W\), the range of \(T\) is the subset of \(W\) consisting of those vectors that are of the form \(T(v)\) for some \(v \in V\):

\[\text{range } T = \{T(v): v \in V\}\]

Corollary 3.19: The Range is a Subspace

If \(T \in L(V, W)\), then range \(T\) is a subspace of \(W\).

Definition 3.20: Surjective (On to)

A function \(T: V \rightarrow W\) is called surjective if its range equals \(W\):

\[\text{range } T = W\]

Theorem 3.22: Fundamental Theorem of Linear Maps

Suppose \(V\) is finite-dimensional and \(T \in L(V, W)\). Then range \(T\) is finite-dimensional and

\[\dim V = \dim \text{null } T + \dim \text{range } T\]
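For \(T: \mathbb{F}^n \rightarrow \mathbb{F}^m\) given by a matrix \(A\), the SVD provides an orthonormal basis of null \(T\), so the theorem can be checked numerically (a sketch; the random matrix and tolerance are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 5)).astype(float)  # T: R^5 -> R^3

_, s, vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))            # dim range T
null_basis = vt[rank:]                   # remaining right singular vectors span null T
assert np.allclose(A @ null_basis.T, 0)  # these vectors really lie in null T

# dim V = dim null T + dim range T
assert A.shape[1] == null_basis.shape[0] + rank
```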

Corollary 3.23: A Map to a Smaller Dimensional Space is not Injective

Suppose \(V\) and \(W\) are finite-dimensional vector spaces such that \(\dim V > \dim W\). Then no linear map from \(V\) to \(W\) is injective.

Corollary 3.24: A Map to a Larger Dimensional Space is not Surjective

Suppose \(V\) and \(W\) are finite-dimensional vector spaces such that \(\dim V < \dim W\). Then no linear map from \(V\) to \(W\) is surjective.

Corollary 3.26: Homogeneous System of Linear Equations (\(Ax = 0\))

A homogeneous system of linear equations with more variables than equations has nonzero solutions. (This follows from Corollary 3.23.)

Corollary 3.29: Inhomogeneous System of Linear Equations (\(Ax = c\))

An inhomogeneous system of linear equations with more equations than variables has no solution for some choice of the constant terms. (This follows from Corollary 3.24.)

Matrices

If \(T\) is a linear map from \(\mathbb{F}^n\) to \(\mathbb{F}^m\), then unless stated otherwise, assume the bases in question are the standard ones, where the \(k\)th basis vector has \(1\) in the \(k\)th slot and \(0\) elsewhere.

Definition 3.30: Matrix \(A_{j,k}\)

Let \(m, n\) denote positive integers. An \(m \times n\) matrix \(A\) is a rectangular array of elements of \(\mathbb{F}\) with \(m\) rows and \(n\) columns:

\[ A = \begin{bmatrix} A_{1, 1} & \cdots & A_{1, n}\\ \vdots & & \vdots\\ A_{m, 1} & \cdots & A_{m, n} \end{bmatrix} \] The notation \(A_{j, k}\) denotes the entry in row \(j\), column \(k\) of \(A\).

Definition 3.32: Matrix of a Linear Map, \(M (T)\)

Suppose \(T \in L(V, W)\) and \(v_1, ..., v_n\) is a basis of \(V\) and \(w_1, ..., w_m\) is a basis of \(W\). The matrix of \(T\) with respect to these bases is the \(m\) by \(n\) matrix \(M(T)\) whose entries \(A_{j, k}\) are defined by:

\[T(v_k) = A_{1, k} w_1 + ... + A_{m, k} w_m\]

If the bases are not clear from the context, then the notation \(M(T, (v_1, ..., v_n), (w_1, ..., w_m))\) is used.

We can think of the \(k\)th column of \(M(T)\) as the coordinates of \(T(v_k)\), i.e. \(T\) applied to the \(k\)th basis vector of \(V\), expressed in the basis of \(W\).

Suppose \(T \in L(\mathbb{F}^2, \mathbb{F}^3)\) is defined by \[T(x, y) = (x + 3y, 2x+5y, 7x + 9y)\] Then, since \(T(1, 0) = (1, 2, 7)\) and \(T(0, 1) = (3, 5, 9)\), the matrix of \(T\) w.r.t the standard bases of \(\mathbb{F}^2\) and \(\mathbb{F}^3\) is: \[M(T) = \begin{bmatrix} 1 & 3\\ 2 & 5\\ 7 & 9 \end{bmatrix}\]
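The same computation in numpy: build \(M(T)\) column by column by applying \(T\) to the standard basis vectors (a sketch of the example above):

```python
import numpy as np

def T(x, y):
    return np.array([x + 3*y, 2*x + 5*y, 7*x + 9*y])

# Column k of M(T) is T applied to the k-th standard basis vector.
M_T = np.column_stack([T(1, 0), T(0, 1)])
print(M_T)        # [[1 3], [2 5], [7 9]]

# Applying T agrees with multiplying by M(T):
assert np.array_equal(T(4, -2), M_T @ np.array([4, -2]))
```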

Definition 3.35: Matrix Addition

The sum of two matrices \(A, C\) of the same size is the matrix obtained by adding corresponding entries in the matrices:

\[(A + C)_{j, k} = A_{j, k} + C_{j, k}\]

Corollary 3.36: The Matrix of the Sum of Linear Maps

Suppose \(S, T \in L(V, W)\). Then \(M(S + T) = M(S) + M(T)\)

Definition 3.37: Scalar Multiplication of a Matrix

The product of a scalar \(\lambda\) and a matrix \(A\) is the matrix obtained by multiplying each entry in the matrix by the scalar:

\[(\lambda A)_{j, k} = \lambda A_{j, k}\]

Corollary 3.38: The Matrix of a Scalar Times a Linear Map

Suppose \(\lambda \in \mathbb{F}\) and \(T \in L(V, W)\). Then \(M(\lambda T) = \lambda M(T)\).

Notation 3.39: \(\mathbb{F}^{m, n}\)

For \(m\) and \(n\) positive integers, the set of all \(m\)-by-\(n\) matrices with entries in \(\mathbb{F}\) is denoted by \(\mathbb{F}^{m, n}\).

Corollary 3.40: \(\dim \mathbb{F}^{m, n} = mn\)

Suppose \(m, n\) are positive integers. With addition and scalar multiplication defined as in 3.35 and 3.37, \(\mathbb{F}^{m, n}\) is a vector space with dimension \(mn\).

Definition 3.41: Matrix Multiplication

Suppose \(A\) is an \(m\) by \(n\) matrix and \(C\) is an \(n\) by \(p\) matrix. Then \(AC\) is defined to be the \(m\) by \(p\) matrix whose entry in row \(j\), column \(k\), is given by the following equation:

\[(AC)_{j,k} = \sum^{n}_{r=1} A_{j, r}C_{r, k}\]

In other words, the entry in row \(j\), column \(k\), of \(AC\) is computed by taking row \(j\) of \(A\) and column \(k\) of \(C\), multiplying together corresponding entries and then summing.
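A direct transcription of this definition, checked against numpy's built-in product (an illustrative sketch, not an efficient implementation):

```python
import numpy as np

def matmul(A, C):
    # (AC)[j, k] = sum over r of A[j, r] * C[r, k]
    m, n = A.shape
    n2, p = C.shape
    assert n == n2, "inner dimensions must agree"
    out = np.zeros((m, p))
    for j in range(m):
        for k in range(p):
            out[j, k] = sum(A[j, r] * C[r, k] for r in range(n))
    return out

A = np.array([[1., 2.], [3., 4.], [5., 6.]])  # 3 x 2
C = np.array([[7., 8., 9.], [0., 1., 2.]])    # 2 x 3
assert np.allclose(matmul(A, C), A @ C)
```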

Corollary 3.43: The Matrix of the Product of Linear Maps

If \(T \in L(U, V)\) and \(S \in L(V, W)\), then \(M(ST) = M(S) M(T)\)

Notation 3.44: \(A_{j, \cdot}, A_{\cdot, k}\)

Suppose \(A\) is an \(m\) by \(n\) matrix:

  • If \(1 \leq j \leq m\), then \(A_{j, \cdot}\) denotes the \(1\) by \(n\) matrix consisting of row \(j\) of \(A\).
  • If \(1 \leq k \leq n\), then \(A_{\cdot, k}\) denotes the \(m\) by \(1\) matrix consisting of column \(k\) of \(A\).

Thus, we can think of matrix multiplication entrywise as row times column: \((AC)_{j, k} = A_{j, \cdot}\, C_{\cdot, k}\).

Corollary 3.49: Column of Matrix Product Equals Matrix Times Column

Suppose \(A\) is an \(m\) by \(n\) matrix and \(C\) is an \(n\) by \(p\) matrix. Then

\[(AC)_{\cdot, k} = A C_{\cdot, k}\]

for \(1 \leq k \leq p\).

Corollary 3.52: Linear Combination of Columns

Suppose \(A\) is an \(m\) by \(n\) matrix and \(c = [c_1 \; ... \; c_n]^T\) is an \(n\) by \(1\) matrix, then:

\[Ac = c_1 A_{\cdot, 1} + ... + c_n A_{\cdot, n}\]

In other words, \(Ac\) is a linear combination of the columns of \(A\), with the scalars that multiply the columns coming from \(c\).

Corollary 3.53: Row of Matrix Product Equals Row Times Matrix

Suppose \(A\) is an \(m\) by \(n\) matrix and \(C\) is an \(n\) by \(p\) matrix, then:

\[(AC)_{j, \cdot} = A_{j, \cdot} C\]

for \(1 \leq j \leq m\).

Corollary 3.54: Linear Combination of Rows

Suppose \(a = [a_1 \; ... \; a_n]\) is a \(1 \times n\) matrix and \(C\) is an \(n \times p\) matrix, then:

\[aC = a_1 C_{1, \cdot} + ... + a_n C_{n, \cdot}\]
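The four row/column identities above (3.49, 3.52, 3.53, 3.54) are easy to spot-check numerically; a numpy sketch with arbitrary sizes:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 4))
AC = A @ C
j, k = 2, 1

assert np.allclose(AC[:, k], A @ C[:, k])      # 3.49: column of product
assert np.allclose(AC[j, :], A[j, :] @ C)      # 3.53: row of product
c = C[:, k]                                    # 3.52: Ac combines columns of A
assert np.allclose(A @ c, sum(c[r] * A[:, r] for r in range(A.shape[1])))
a = A[j, :]                                    # 3.54: aC combines rows of C
assert np.allclose(a @ C, sum(a[r] * C[r, :] for r in range(C.shape[0])))
```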


Invertibility and Isomorphic Vector Spaces

Definition 3.53: Invertible, Inverse

  • A linear map \(T \in L(V, W)\) is called invertible if there exists a linear map \(S \in L(W, V)\) s.t \(ST\) equals the identity map on \(V\) and \(TS\) equals the identity map on \(W\).
  • A linear map \(S \in L(W, V)\) satisfying \(ST = I\) and \(TS = I\) is called an inverse of \(T\) (note that the first \(I\) is the identity map on \(V\) and the second \(I\) is the identity map on \(W\)).

Corollary 3.54: Inverse is Unique

An invertible linear map has a unique inverse.

Notation 3.55: \(T^{-1}\)

If \(T\) is invertible, then its inverse is denoted by \(T^{-1}\). In other words, if \(T \in L(V, W)\) is invertible, then \(T^{-1}\) is the unique element of \(L(W, V)\) s.t \(T^{-1}T = I\) and \(TT^{-1} = I\).

Corollary 3.56: Invertibility is Equivalent to Injectivity and Surjectivity

A linear map is invertible if and only if it is injective and surjective.

Definition 3.58: Isomorphism, Isomorphic (Equal Shape)

  • An isomorphism is an invertible linear map.
  • Two vector spaces are called isomorphic if there is an isomorphism from one vector space onto the other.

One way to think of an isomorphism \(T: V \rightarrow W\) is that it relabels each \(v \in V\) as \(T(v) \in W\); nothing is lost, because \(T^{-1}\) maps \(T(v) \in W\) back to \(v \in V\).

Corollary 3.59: Dimension Shows Whether Vector Spaces are Isomorphic

Two finite-dimensional vector spaces over \(\mathbb{F}\) are isomorphic if and only if they have the same dimension.

Given bases of \(V\) and \(W\), the map \(T \mapsto M(T)\) is linear by Corollaries 3.36 and 3.38. In fact, more is true:

Corollary 3.60: \(L(V, W)\) and \(\mathbb{F}^{m,n}\) are Isomorphic

Suppose \(v_1, ..., v_n\) is a basis of \(V\) and \(w_1, ..., w_m\) is a basis of \(W\). Then \(M\) is an isomorphism between \(L(V, W)\) and \(\mathbb{F}^{m, n}\).

Corollary 3.61: \(\dim L(V, W) = (\dim V) (\dim W)\)

Suppose \(V, W\) are finite-dimensional. Then \(L(V, W)\) is finite-dimensional and:

\[\dim L(V, W) = (\dim V) (\dim W)\]

Definition 3.62: Matrix of a Vector, \(M(v)\)

Suppose \(v \in V\) and \(v_1, ..., v_n\) is a basis of \(V\). The matrix of \(v\) w.r.t this basis is the \(n\) by \(1\) matrix:

\[ M(v) = \begin{bmatrix} c_1\\ \vdots\\ c_n \end{bmatrix} \]

where \(c_1, ..., c_n\) are the scalars such that:

\[v = c_1 v_1 + ... + c_n v_n\]

\(M(v)\) depends on the basis, so the basis should be clear from the context; it is therefore not included in the notation. \(M\) is an isomorphism of \(V\) onto \(\mathbb{F}^{n, 1}\).

Corollary 3.64: \(M(T)_{\cdot, k} = M(T(v_k))\)

Suppose \(T \in L(V, W)\) and \(v_1, ..., v_n\) is a basis of \(V\) and \(w_1, ..., w_m\) is a basis of \(W\). Let \(1 \leq k \leq n\). Then the \(k\)th column of \(M(T)\), which is denoted by \(M(T)_{\cdot, k}\), equals \(M(T(v_k))\).

Theorem 3.65: Linear Maps Act Like Matrix Multiplication

Suppose \(T \in L(V, W)\) and \(v \in V\). Suppose \(v_1, ..., v_n\) is a basis of \(V\) and \(w_1, ..., w_m\) is a basis of \(W\). Then:

\[M(T(v)) = M(T) M(v)\]


We can think of every linear map as matrix multiplication: once bases are fixed, each \(T(v) \in W\) is identified by

\[M(T(v)) = M(T) M(v)\]

and \(T(v)\) can be recovered from \(M(T(v))\).
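A coordinate check of 3.65 (a sketch; the map \(T(x, y) = (2x + y, x)\), the basis \(v_1 = (1, 0), v_2 = (1, 1)\) of \(V = \mathbb{R}^2\), and the standard basis of \(W = \mathbb{R}^2\) are my own choices):

```python
import numpy as np

A_std = np.array([[2., 1.], [1., 0.]])     # T in standard coordinates
B = np.array([[1., 1.], [0., 1.]])         # columns: v1, v2 (basis of V)

M_T = A_std @ B                            # column k: T(v_k) in W's (standard) basis
M = lambda v: np.linalg.solve(B, v)        # M(v): coordinates of v w.r.t. v1, v2

v = np.array([3., 5.])
# M(T(v)) is just T(v) here, since W carries the standard basis.
assert np.allclose(A_std @ v, M_T @ M(v))  # M(T(v)) = M(T) M(v)
```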

Definition 3.67: Operator, \(L(V)\)

  • A linear map from a vector space to itself is called an operator.
  • The notation \(L(V)\) denotes the set of all operators on \(V\): \[L(V) = L(V, V)\]

Theorem 3.69: Injectivity is Equivalent to Surjectivity in Finite Dimensions

Suppose \(V\) is finite-dimensional and \(T \in L(V)\). Then the following are equivalent:

  1. \(T\) is invertible.
  2. \(T\) is injective.
  3. \(T\) is surjective.

In an infinite-dimensional vector space, neither injectivity nor surjectivity implies invertibility. For example, on \(P(\mathbb{F})\), the multiplication map \(p(z) \mapsto z^2 p(z)\) is injective but not surjective, while differentiation is surjective but not injective.
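For operators on \(\mathbb{F}^n\) represented by square matrices, all three conditions reduce to full rank, which is easy to verify numerically (a sketch; the test matrices are arbitrary):

```python
import numpy as np

def is_injective(A):   # null T = {0}
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_surjective(A):  # range T = codomain
    return np.linalg.matrix_rank(A) == A.shape[0]

A = np.array([[2., 1.], [1., 1.]])   # invertible operator on R^2
B = np.array([[1., 2.], [2., 4.]])   # rank 1: neither injective nor surjective
for T in (A, B):
    assert is_injective(T) == is_surjective(T)  # equivalent for square matrices
```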

Products and Quotients of Vector Spaces

As usual, when dealing with more than one vector space, all the vector spaces in use should be over the same field.

Definition 3.71: Product of Vector Spaces

Suppose \(V_1, ..., V_m\) are vector spaces over \(\mathbb{F}\).

  • The product \(V_1 \times V_2 \times ... \times V_m\) is defined by: \[V_1 \times ... \times V_m = \{(v_1, ..., v_m): v_1 \in V_1 , ..., v_m \in V_m\}\]
  • Addition on \(V_1 \times V_2 \times ... \times V_m\) is defined by: \[(u_1, ..., u_m) + (v_1, ..., v_m) = (u_1 + v_1 , ..., u_m + v_m)\]
  • Scalar multiplication on \(V_1 \times ... \times V_m\) is defined by: \[\lambda (v_1, ..., v_m) = (\lambda v_1 , ..., \lambda v_m)\]

Elements of \(P_2(\mathbb{R}) \times \mathbb{R}^3\) are lists of length 2:

\[(5 - 6x + 16x^2, (1, 2, 3)) \in P_2(\mathbb{R}) \times \mathbb{R}^3\]

Theorem 3.73: Product of Vector Space is a Vector Space

Suppose \(V_1, ..., V_m\) are vector spaces over \(\mathbb{F}\). Then \(V_1 \times ... \times V_m\) is a vector space over \(\mathbb{F}\).

Elements of \(\mathbb{R}^2 \times \mathbb{R}^3\) have length 2 and elements of \(\mathbb{R}^5\) have length 5, so the two spaces are not identical. But the linear map \(((x_1, x_2), (x_3, x_4, x_5)) \mapsto (x_1, x_2, x_3, x_4, x_5)\) is clearly an isomorphism, so the two vector spaces are isomorphic.

Theorem 3.76: Dimension of a Product is the Sum of Dimensions

Suppose \(V_1, ..., V_m\) are finite-dimensional vector spaces. Then \(V_1 \times ... \times V_m\) is finite-dimensional and:

\[\dim(V_1 \times ... \times V_m) = \dim V_1 + ... + \dim V_m\]

Proof of Theorem 3.76

Choose a basis of each \(V_j\). For each basis vector of each \(V_j\), consider the element of \(V_1 \times ... \times V_m\) that equals the basis vector in the \(j\)th slot and 0 in other slots. The list of all such vectors is linearly independent and spans the product space. Thus, it is a basis of \(V_1 \times ... \times V_m\). The length is \(\dim V_1 + ... + \dim V_m\).

Theorem 3.77: Products and Direct Sums

Suppose that \(U_1, ..., U_m\) are subspaces of \(V\). Define a linear map \(\Gamma: U_1 \times ... \times U_m \rightarrow U_1 + ... + U_m\) by:

\[\Gamma (u_1, ..., u_m) = u_1 + ... + u_m\]

Then \(U_1 + ... + U_m\) is a direct sum if and only if \(\Gamma\) is injective. (By the definition of the sum of subspaces, \(\Gamma\) is automatically surjective, so injectivity here is equivalent to invertibility.)

Proof of Theorem 3.77:
  1. \(\Leftarrow\): If \(\Gamma\) is injective, then the only way to write \(0\) as a sum \(u_1 + ... + u_m\) with each \(u_j \in U_j\) is by taking every \(u_j = 0\), which is exactly the condition for a direct sum in Def 1.44.
  2. \(\Rightarrow\): Conversely, if \(U_1 + ... + U_m\) is a direct sum, then by Def 1.44 the only representation of \(0\) is the trivial one, so \(\text{null } \Gamma = \{0\}\), which implies injectivity.

Theorem 3.78: A Sum is a Direct Sum IFF Dimensions Add Up

Suppose \(V\) is finite-dimensional and \(U_1, ..., U_m\) are subspaces of \(V\). Then \(U_1 + ... + U_m\) is a direct sum if and only if:

\[\dim (U_1 + ... + U_m) = \dim (U_1) + ... + \dim (U_m)\]

Proof of Theorem 3.78:

By the definition of the sum of subspaces, \(\Gamma\) is surjective, so:

\[\dim \text{range } \Gamma = \dim (U_1 + ... + U_m)\]

Applying 3.22 to \(\Gamma\):

\[\dim (U_1 \times ... \times U_m) = \dim \text{null } \Gamma + \dim \text{range } \Gamma\]

By 3.77, the sum \(U_1 + ... + U_m\) is direct iff \(\Gamma\) is injective, i.e. iff \(\dim \text{null } \Gamma = 0\), which by the two equations above holds iff:

\[\dim (U_1 \times ... \times U_m) = \dim (U_1 + ... + U_m)\]

By 3.76, the left side equals \(\dim (U_1) + ... + \dim (U_m)\), which gives the claimed equivalence.
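Representing each subspace by a basis stored as matrix columns, 3.78 yields a simple numerical test for directness of a sum (a numpy sketch; the subspaces are illustrative):

```python
import numpy as np

def is_direct_sum(*bases):
    # The sum is direct iff dim(U1 + ... + Um) equals the sum of the dims.
    stacked = np.hstack(bases)
    dims = sum(B.shape[1] for B in bases)
    return np.linalg.matrix_rank(stacked) == dims

U1 = np.array([[1.], [0.], [0.]])              # span(e1)
U2 = np.array([[0., 0.], [1., 0.], [0., 1.]])  # span(e2, e3)
U3 = np.array([[1.], [1.], [0.]])              # meets U1 + U2 nontrivially

print(is_direct_sum(U1, U2))      # True:  R^3 = U1 ⊕ U2
print(is_direct_sum(U1, U2, U3))  # False: dims add to 4 > 3
```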

Definition 3.79: \(v + U\)

Suppose \(v \in V\) and \(U\) is a subspace of \(V\). Then \(v + U\) is the subset of \(V\) defined by:

\[v + U = \{v + u: u \in U\}\]

Definition 3.81: Affine subset, Parallel

  • An affine subset of \(V\) is a subset of \(V\) of the form \(v + U\) for some \(v \in V\) and some subspace \(U\) of \(V\).
  • For \(v \in V\) and \(U\) a subspace of \(V\), the affine subset \(v + U\) is said to be parallel to \(U\).

If \(U = \{(x, y, 0) \in \mathbb{R}^3: x, y \in \mathbb{R}\}\), then the affine subsets of \(\mathbb{R}^3\) parallel to \(U\) are the planes in \(\mathbb{R}^3\) that are parallel to the \(xy\)-plane \(U\) in the usual sense.

Definition 3.83: Quotient Space, \(V / U\)

Suppose \(U\) is a subspace of \(V\). Then the quotient space \(V / U\) is the set of all affine subsets of \(V\) parallel to \(U\). In other words:

\[V / U = \{v + U: v \in V\}\]

If \(U = \{(x, 2x) \in \mathbb{R}^2: x \in \mathbb{R}\}\), then \(\mathbb{R}^2 / U\) is the set of all lines in \(\mathbb{R}^2\) with slope \(2\). If \(U\) is a line in \(\mathbb{R}^3\) containing the origin, then \(\mathbb{R}^3 / U\) is the set of all lines in \(\mathbb{R}^3\) parallel to \(U\).

Theorem 3.85: Two Affine Subsets Parallel to \(U\) are Equal or Disjoint

Suppose \(U\) is a subspace of \(V\) and \(v, w \in V\). Then the following are equivalent:

  1. \(v - w \in U\)
  2. \(v + U = w + U\)
  3. \((v + U) \cap (w + U) \neq \emptyset\)

Proof of Theorem 3.85

Assume \((1)\) holds. Then for any \(u \in U\):

\[v + u = w + \underbrace{(v - w + u)}_{\in U} \in w + U\]

Since the additive inverse \(w - v\) is also in \(U\), likewise:

\[w + u = v + \underbrace{(w - v + u)}_{\in U} \in v + U\]

Then we have:

\[v + U \subseteq w + U\]

and

\[w + U \subseteq v + U\]

Thus \(v + U = w + U\), so \((2)\) holds; and since these sets are nonempty, \((2)\) implies \((3)\).

Assume \((3)\) holds. Then there exist \(u_1, u_2 \in U\) such that:

\[v + u_1 = w + u_2 \implies v - w = u_2 - u_1 \in U\]

so \((3) \implies (1)\). Combined with \((1) \implies (2)\) and \((2) \implies (3)\), this completes the chain of equivalences.

Definition 3.86: Addition and Scalar Multiplication on \(V / U\)

Suppose \(U\) is a subspace of \(V\). Then addition and scalar multiplication are defined on \(V / U\) by:

\[(v + U) + (w + U) = (v + w) + U\] \[\lambda (v + U) = (\lambda v) + U\]

for \(v, w \in V\) and \(\lambda \in \mathbb{F}\).

Theorem 3.87: Quotient Space is a Vector Space

Suppose \(U\) is a subspace of \(V\). Then \(V / U\), with the operations of addition and scalar multiplication as defined above, is a vector space.

Notice that the additive identity of \(V / U\) is \(0 + U = U\), and the additive inverse of \(v + U\) is \((-v) + U\).

Definition 3.88: Quotient Map, \(\pi\)

Suppose \(U\) is a subspace of \(V\). The quotient map \(\pi\) is the linear map \(\pi : V \rightarrow V / U\) defined by:

\[\pi(v) = v + U\]

for \(v \in V\).

Theorem 3.89: Dimension of a Quotient Space

Suppose \(V\) is finite-dimensional and \(U\) is a subspace of \(V\). Then:

\[\dim V / U = \dim V - \dim U\]

Proof of Theorem 3.89

Consider the quotient map \(\pi: V \rightarrow V / U\). Since \(0 + U = U\) is the additive identity in \(V / U\), we have \(\pi(v) = 0 + U\) iff \(v + U = U\) iff \(v \in U\); hence \(\text{null } \pi = U\). Clearly \(\text{range } \pi = V / U\). Thus, by 3.22:

\[\dim V = \dim U + \dim V / U \implies \dim V / U = \dim V - \dim U\]

Definition 3.90: \(\tilde{T}\)

Suppose \(T \in L(V, W)\). Define \(\tilde{T}: V / (\text{null } T) \rightarrow W\) by:

\[\tilde{T} (v + \text{null } T) = T(v)\]

This is well defined: if \(v + \text{null } T = u + \text{null } T\), then \(v - u \in \text{null } T\) by 3.85, so \(T(v) = T(u)\).

Theorem 3.91: Null Space and Range of \(\tilde{T}\)

Suppose \(T \in L(V, W)\). Then

  1. \(\tilde{T}\) is a linear map from \(V / \text{null } T\) to \(W\).
  2. \(\tilde{T}\) is injective.
  3. \(\text{range } \tilde{T} = \text{range }T\).
  4. \(V / (\text{null } T)\) is isomorphic to \(\text{range }T\).

Duality

Definition 3.92: Linear Functional

A linear functional on \(V\) is a linear map from \(V\) to the scalar field \(\mathbb{F}\). In other words, a linear functional is an element of \(L(V, \mathbb{F})\).

Define \(\phi: \mathbb{R}^3 \rightarrow \mathbb{R}\) by \(\phi(x, y, z) = 4x - 5y + 2z\). Then \(\phi\) is a linear functional on \(\mathbb{R}^3\).

Definition 3.94: Dual Space, \(V^{\prime}\)

The dual space of \(V\), denoted \(V^{\prime}\), is the vector space of all linear functionals on \(V\). In other words:

\[V^{\prime} = L(V, \mathbb{F})\]

Theorem 3.95: \(\dim V^{\prime} = \dim V\)

Suppose \(V\) is finite-dimensional. Then \(V^{\prime}\) is also finite-dimensional and \(\dim V^{\prime} = \dim V\).

Proof of Theorem 3.95:

By Corollary 3.61, \(\dim V^{\prime} = \dim L(V, \mathbb{F}) = (\dim V)(\dim \mathbb{F}) = \dim V\), since \(\dim \mathbb{F} = 1\).

Definition 3.96: Dual Basis

If \(v_1, ..., v_n\) is a basis of \(V\), then the dual basis of \(v_1, ..., v_n\) is the list \(\psi_1 , ..., \psi_n\) of elements of \(V^{\prime}\), where each \(\psi_j\) is a linear functional on \(V\) s.t:

\[ \psi_j(v_k)= \begin{cases} 1, \quad \text{if} \;\; k = j\\ 0, \quad \text{if} \;\; k \neq j \end{cases} \]

What is the dual basis of the standard basis \(e_1, ..., e_n\) of \(\mathbb{F}^n\)?

For \(1 \leq j \leq n\), define \(\psi_j\) to be the linear functional on \(\mathbb{F}^n\) such that:

\[\psi_j(x_1, ..., x_n) = x_j, \quad \quad (x_1, ..., x_n) \in \mathbb{F}^n\] Clearly: \[ \psi_j(e_k)= \begin{cases} 1, \quad \text{if} \;\; k = j\\ 0, \quad \text{if} \;\; k \neq j \end{cases} \]
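More generally, identifying functionals on \(\mathbb{F}^n\) with \(1 \times n\) row vectors, the dual basis of a basis \(v_1, ..., v_n\) (stored as the columns of \(B\)) consists of the rows of \(B^{-1}\), since \((B^{-1}B)_{j, k}\) is \(1\) if \(j = k\) and \(0\) otherwise. A numpy sketch with a non-standard basis:

```python
import numpy as np

B = np.array([[1., 1.],
              [0., 1.]])   # columns: v1 = (1, 0), v2 = (1, 1)
dual = np.linalg.inv(B)    # row j represents psi_j

print(dual @ B)            # identity matrix: psi_j(v_k) = delta_{jk}
```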

Theorem 3.98: Dual Basis is a Basis of the Dual Space

Suppose \(V\) is finite-dimensional. Then the dual basis of a basis of \(V\) is a basis of \(V^{\prime}\).

Proof of Theorem 3.98

Suppose \(v_1, ..., v_n\) is a basis of \(V\), and let \(\psi_1, ..., \psi_n\) be the dual basis. Suppose \(a_1 \psi_1 + ... + a_n \psi_n = 0\). Applying both sides to each basis vector \(v_j\):

\[(a_1 \psi_1 + ... + a_n \psi_n)(v_j) = a_j = 0\]

Thus all \(a_j = 0\), so the list \(\psi_1, ..., \psi_n\) is linearly independent.

Since \(\psi_1, ..., \psi_n\) is a linearly independent list of length \(n = \dim V^{\prime}\) (by Theorem 3.95), it is a basis of the dual space by Corollary 2.39.

Definition 3.99: Dual Map, \(T^{\prime}\)

If \(T \in L(V, W)\), then the dual map of \(T\) is the linear map \(T^{\prime} \in L(W^{\prime}, V^{\prime})\) defined by \(T^{\prime} (\psi) = \psi \circ T\) for \(\psi \in W^{\prime}\), so:

\[T^{\prime} (\psi): V \rightarrow \mathbb{F}\]

Theorem 3.101: Algebraic Properties of Dual Maps

  1. \((S+T)^{\prime} = S^{\prime} + T^{\prime}\) for all \(S, T \in L(V, W)\)
  2. \((\lambda T)^{\prime} = \lambda T^{\prime}\) for all \(\lambda \in \mathbb{F}\) and all \(T \in L(V, W)\)
  3. \((ST)^{\prime} = T^{\prime}S^{\prime}\) for all \(T \in L(U, V)\) and all \(S \in L(V, W)\)
Proof of Theorem 3.101
  1. \((S + T)^{\prime} (\psi) = \psi \circ (S + T) = \psi \circ S + \psi \circ T = S^{\prime}(\psi) + T^{\prime}(\psi)\)
  2. \((\lambda T)^{\prime} (\psi) = \psi \circ (\lambda T) = \lambda (\psi \circ T) = \lambda T^{\prime}(\psi)\)
  3. \((ST)^\prime (\psi) = \psi \circ (ST) = (\psi \circ S) \circ T = T^{\prime} (\psi \circ S) = T^{\prime} (S^{\prime} (\psi)) = (T^{\prime} S^{\prime})(\psi)\), using Definition 3.8

Definition 3.102: Annihilator, \(U^0\)

For \(U \subseteq V\), the annihilator of \(U\), denoted \(U^0\), is defined by:

\[U^0 = \{\psi \in V^{\prime}: \psi (u) = 0, \; \forall u \in U\}\]

Theorem 3.105: The Annihilator is a Subspace

Suppose \(U \subseteq V\). Then \(U^0\) is a subspace of \(V^\prime\). Here \(0 \in U^0\) is the zero linear functional, which maps every \(v \in V\) to \(0 \in \mathbb{F}\).

Theorem 3.106: Dimension of the Annihilator

Suppose \(V\) is finite-dimensional and \(U\) is a subspace of \(V\). Then:

\[\dim U + \dim U^0 = \dim V\]
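Under the same row-vector picture of \(V^{\prime}\), a functional \(\psi\) annihilates \(U = span(\text{columns of } B)\) exactly when \(\psi B = 0\), i.e. \(\psi\) lies in the null space of \(B^T\). A numpy sketch checking the dimension formula in \(\mathbb{R}^4\):

```python
import numpy as np

B = np.array([[1., 0.],
              [0., 1.],
              [1., 1.],
              [0., 0.]])               # basis of U as columns; dim U = 2
n = B.shape[0]                         # dim V = 4

_, s, vt = np.linalg.svd(B.T)
rank = int(np.sum(s > 1e-10))          # = dim U
U0_basis = vt[rank:]                   # rows form a basis of U^0
assert np.allclose(U0_basis @ B, 0)    # each psi annihilates all of U
assert rank + U0_basis.shape[0] == n   # dim U + dim U^0 = dim V
```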

Theorem 3.107: The Null Space of \(T^{\prime}\)

Suppose \(V, W\) are finite-dimensional and \(T \in L(V, W)\). Then:

  1. \(\text{null } T^\prime = (\text{range } T)^0\)
  2. \(\dim \text{null } T^{\prime} = \dim \text{null } T + \dim W - \dim V\)

Theorem 3.108: \(T\) Surjective is Equivalent to \(T^{\prime}\) Injective

Suppose \(V, W\) are finite-dimensional and \(T \in L(V, W)\). Then \(T\) is surjective iff \(T^{\prime}\) is injective.

Theorem 3.109: The Range of \(T^{\prime}\)

Suppose \(V, W\) are finite-dimensional and \(T \in L(V, W)\). Then:

  1. \(\dim \text{range }T^{\prime} = \dim \text{range } T\)
  2. \(\text{range }T^{\prime} = (\text{null } T)^0\)

Theorem 3.110: \(T\) injective is equivalent to \(T^{\prime}\) Surjective

Suppose \(V, W\) are finite-dimensional and \(T \in L(V, W)\). Then \(T\) is injective if and only if \(T^{\prime}\) is surjective.

Matrix of the Dual of a Linear Map

Definition 3.111: Transpose, \(A^T\)

The transpose of a matrix \(A\), denoted \(A^T\), is the matrix obtained from \(A\) by interchanging the rows and columns. More specifically, if \(A\) is an \(m \times n\) matrix, then \(A^T\) is an \(n \times m\) matrix whose entries are given by:

\[(A^T)_{k, j} = A_{j, k}\]

Theorem 3.113:

If \(A\) is an \(m \times n\) matrix and \(C\) is an \(n \times p\) matrix:

\[(AC)^T = C^TA^T\] \[(A + C)^T = A^T + C^T\] \[(\lambda A)^T = \lambda A^T\]

Theorem 3.114: The Matrix of \(T^{\prime}\) is the Transpose of the Matrix of \(T\)

Suppose \(T \in L(V, W)\). Then \(M(T^{\prime}) = (M(T))^T\), where \(M(T^{\prime})\) is computed w.r.t the dual bases of \(W^{\prime}\) and \(V^{\prime}\).

Definition 3.115: Row Rank, Column Rank

Suppose \(A\) is an \(m \times n\) matrix with entries in \(\mathbb{F}\).

  • The row rank of \(A\) is the dimension of the span of the rows of \(A\) in \(\mathbb{F}^{1, n}\).
  • The column rank of \(A\) is the dimension of the span of the columns of \(A\) in \(\mathbb{F}^{m, 1}\).

Theorem 3.117: Dimension of range \(T\) Equals Column Rank of \(M(T)\)

Suppose \(V, W\) are finite-dimensional and \(T \in L(V, W)\). Then \(\dim \text{range } T\) equals the column rank of \(M(T)\).

Proof of Theorem 3.117:

Assume \(v_1, ..., v_n\) and \(w_1, ..., w_m\) are bases of \(V, W\) respectively, and let \(A = M(T)\). Since \(M\) is an isomorphism from \(W\) onto \(\mathbb{F}^{m, 1}\) (see 3.62), and since:

\[M(a_1 Tv_1 + ... + a_n Tv_n) = a_1 M(Tv_1) + ... + a_n M(Tv_n)\]

\(M\) restricts to an isomorphism of \(\text{span}(Tv_1, ..., Tv_n)\) onto \(\text{span}(M(Tv_1), ..., M(Tv_n))\). By Corollary 3.59:

\[\dim \text{span}(Tv_1, ..., Tv_n) = \dim \text{span}(M(Tv_1), ..., M(Tv_n))\]

The left side equals \(\dim \text{range } T\), because \(Tv_1, ..., Tv_n\) spans \(\text{range } T\); by 3.64, each \(M(Tv_k)\) is the \(k\)th column of \(A\), so the right side is the dimension of the span of the columns of \(A\). In other words:

\[\dim \text{range } T = \dim \text{span}(\text{columns of } A) = \text{the column rank of } M(T)\]

Theorem 3.118: Row Rank Equals Column Rank

Suppose \(A \in \mathbb{F}^{m, n}\). Then the row rank of \(A\) equals the column rank of \(A\).
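A quick numerical spot-check of 3.118 on a random matrix (a sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.integers(-2, 3, size=(4, 6)).astype(float)

# Column rank of A^T is the row rank of A; numpy's matrix_rank gives both.
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T)
```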

Definition 3.119: Rank

The rank of a matrix \(A \in \mathbb{F}^{m, n}\) is the column rank of \(A\).