Matrix Operations

Assume $A$, $B$, and $C$ are matrices and $\alpha$ and $\beta$ are constants.

$$A=\begin{bmatrix} a & b & c \\ d & e & f \end{bmatrix} \qquad B=\begin{bmatrix} g & h & i \\ j & k & l \end{bmatrix}$$

Matrix Times Vector

Given an $m \times n$ matrix and a vector of size $n$:

$$\begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} b_1 \\ \vdots \\ b_n \end{bmatrix} = b_1 \begin{bmatrix} a_{11} \\ \vdots \\ a_{m1} \end{bmatrix} + \cdots + b_n \begin{bmatrix} a_{1n} \\ \vdots \\ a_{mn} \end{bmatrix}$$
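
A small NumPy sketch of this column view (the matrix and vector values are made up for illustration):

```python
import numpy as np

# A is 2x3 (m=2, n=3), b has size n=3
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
b = np.array([10.0, 20.0, 30.0])

# Standard matrix-vector product
direct = A @ b

# Same product built as a linear combination of the columns of A:
# b1*col1 + b2*col2 + b3*col3
column_view = sum(b[j] * A[:, j] for j in range(A.shape[1]))

print(direct)                             # [140. 320.]
print(np.allclose(direct, column_view))   # True
```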

Matrix Times Matrix

Dot Product Rule

Given two matrices $A$ and $B$ of sizes $m \times n$ and $n \times p$ respectively (where the number of columns in $A$ is equal to the number of rows in $B$).

The $(i,j)$-th entry of the resulting matrix is row $i$ of matrix $A$ dotted with column $j$ of matrix $B$.
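
A minimal sketch of the rule in Python (made-up $3 \times 2$ and $2 \times 3$ matrices; every entry of the product is one dot product):

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4],
              [5, 6]])        # 3x2
B = np.array([[7, 8, 9],
              [10, 11, 12]])  # 2x3

# Entry (i, j) of AB is row i of A dotted with column j of B
C = np.empty((A.shape[0], B.shape[1]))
for i in range(A.shape[0]):
    for j in range(B.shape[1]):
        C[i, j] = A[i, :] @ B[:, j]   # single dot product

print(np.allclose(C, A @ B))  # True
```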

Determinants of a Matrix

Definition 1: If $A$ is a $1 \times 1$ matrix $A=[a]$, then $\det(A)=a$

Definition 2: If $A$ is a $2 \times 2$ matrix $A=\begin{bmatrix} a & b \\ c & d \end{bmatrix}$, then $\det(A)=ad-bc$

Definition 3*: If $A$ is an $n \times n$ matrix, then

$$A=\begin{bmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{n1} & \cdots & a_{nn} \end{bmatrix} \qquad \det(A)=\sum_{j=1}^{n}(-1)^{1+j}\,a_{1j}\det(A_{1j})$$

where $A_{1j}$ is the $(n-1)\times(n-1)$ matrix obtained from $A$ by deleting row $1$ and column $j$.

Determinants are recursive: you split a larger matrix into smaller ones, find the determinant of each, and combine them accordingly.
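
A minimal sketch of this recursion in plain Python (cofactor expansion along the first row; the example matrix is made up):

```python
def det(M):
    """Determinant by cofactor expansion along the first row (recursive)."""
    n = len(M)
    if n == 1:                      # Definition 1: det([a]) = a
        return M[0][0]
    if n == 2:                      # Definition 2: det = ad - bc
        return M[0][0] * M[1][1] - M[0][1] * M[1][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, then recurse on the smaller matrix
        minor = [row[:j] + row[j+1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

print(det([[1, 0, 2],
           [3, 4, 5],
           [6, 7, 8]]))  # -9
```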

Row reduction can also help here: adding a multiple of one row to another does not change $\det(A)$, and the other row operations change it in a predictable way (see the theorems below).

There are two easy ways to tell if the determinant of a matrix is 0:

  1. An entire row or column is zero
  2. Two rows or columns are equal

Invertible Matrices

If $A$ is a square matrix, a matrix $B$ is the inverse of $A$ if and only if

$$AB=I \quad\text{and}\quad BA=I$$

i.e. if matrices $A$ and $B$ multiply commutatively and the result is the identity matrix, then $A$ is the inverse of $B$ and vice versa.
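
A quick NumPy check of the definition on an arbitrary invertible $2 \times 2$ matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])   # det = 1, so A is invertible
B = np.linalg.inv(A)

I = np.eye(2)
# Both products must give the identity matrix
print(np.allclose(A @ B, I))  # True
print(np.allclose(B @ A, I))  # True
```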

Elementary Matrices

If $E$ is the identity matrix with the row operation $R_2 \to 3R_1 + R_2$ applied, then

$$E=\begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

e.g.

$$\begin{bmatrix} 1 & 0 \\ 0 & 2 \end{bmatrix}^{10}=\begin{bmatrix} 1 & 0 \\ 0 & 2^{10} \end{bmatrix}$$

or

$$\begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix}^{10}=\begin{bmatrix} 1 & 0 \\ 2 \cdot 10 & 1 \end{bmatrix}$$

Importance:

Suppose $A \to B$ takes one row operation. If $E$ is the identity matrix $I$ with this same row operation applied, then $B=EA$

i.e. performing a row operation is just matrix multiplication by an elementary matrix in disguise.
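
A quick NumPy check of this fact, using the $E$ defined above and a made-up $3 \times 3$ matrix $A$:

```python
import numpy as np

# Elementary matrix: identity with the row op R2 -> 3*R1 + R2 applied
E = np.array([[1.0, 0.0, 0.0],
              [3.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

# Performing the row op directly on A...
B = A.copy()
B[1] = 3 * B[0] + B[1]

# ...gives the same result as multiplying by E on the left
print(np.allclose(B, E @ A))  # True
```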

This can be expanded to express $B$ and $A$ in terms of each other using elementary matrices.

e.g. Assuming we get $B$ from $A$ by a sequence of row operations, where $E_1$ is the first "row operation":

$$B=E_n \cdots E_2 E_1 A \quad\Longrightarrow\quad A=(E_n E_{n-1} \cdots E_2 E_1)^{-1}B$$

We can also factor invertible matrices into products of elementary matrices.

e.g. Assuming $A$ is an invertible matrix and $E_1$ is the first "row operation" reducing it to $I$:

$$E_n E_{n-1} \cdots E_2 E_1 A=I \quad\Longrightarrow\quad A=(E_n E_{n-1} \cdots E_2 E_1)^{-1}I$$
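
A tiny worked sketch of this factoring with NumPy (the $2 \times 2$ matrix and the two row operations were chosen by hand for illustration):

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 1.0]])

# Two row ops reduce A to I:
E1 = np.array([[0.5, 0.0],    # R1 -> (1/2) * R1
               [0.0, 1.0]])
E2 = np.array([[1.0, 0.0],    # R2 -> R2 - R1
               [-1.0, 1.0]])

I = np.eye(2)
print(np.allclose(E2 @ E1 @ A, I))  # True: E2 E1 A = I

# Therefore A = (E2 E1)^(-1) = E1^(-1) E2^(-1), a product of elementary matrices
factored = np.linalg.inv(E1) @ np.linalg.inv(E2)
print(np.allclose(A, factored))     # True
```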

Eigenvalues

Eigenvalues are paired with eigenvectors

To find an eigenvalue we use the formula: $$\det(A-\lambda I)=0$$

This comes from the equivalence $Ax=\lambda x=(\lambda I)x$, where $A$ is a square matrix and $x$ is our (nonzero) eigenvector

Moving everything over to one side we get $(A-\lambda I)x=0$. Since $x$ must be nonzero, this equation needs a nontrivial solution, which happens only when $A-\lambda I$ is not invertible, i.e. when the determinant of the LHS matrix is equal to 0. Therefore we get $\det(A-\lambda I)=0$
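
A sanity check on a made-up $2 \times 2$ example: for a $2 \times 2$ matrix, $\det(A-\lambda I)=0$ expands to $\lambda^2 - \operatorname{tr}(A)\lambda + \det(A) = 0$, and its roots match NumPy's eigenvalue routine:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Characteristic polynomial of a 2x2 matrix:
# lambda^2 - trace(A)*lambda + det(A)
coeffs = [1.0, -np.trace(A), np.linalg.det(A)]

print(sorted(np.roots(coeffs)))       # roots of det(A - lambda*I) = 0: ~[2.0, 5.0]
print(sorted(np.linalg.eigvals(A)))   # NumPy's eigenvalues: same ~[2.0, 5.0]
```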

Eigenvectors

Eigenvectors are paired with eigenvalues

To find an eigenvector we first have to find its corresponding eigenvalue $\lambda$.

From there we substitute the eigenvalue into the equation $(A-\lambda I)x=0$ and solve for the eigenvector $x$.
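
Continuing with the same made-up matrix: NumPy's `eig` returns eigenvalue/eigenvector pairs together, so we can verify that each eigenvector $x$ satisfies $Ax=\lambda x$:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

for k in range(len(eigenvalues)):
    lam = eigenvalues[k]
    x = eigenvectors[:, k]   # eigenvectors are the columns of the returned matrix
    # x is a nonzero solution of (A - lam*I) x = 0, i.e. A x = lam x
    print(np.allclose(A @ x, lam * x))   # True, True
```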

Theorems

#determinants
You cannot distribute determinants over matrix addition, i.e. $\det(A+B)\neq\det(A)+\det(B)$ in general.

If $A$ is an $n \times n$ matrix:

  1. $\det(A^T)=\det(A)$
  2. $\det(kA)=k^n\det(A)$
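
Both identities are easy to spot-check numerically (random $4 \times 4$ matrix via NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.random((n, n))
k = 3.0

print(np.allclose(np.linalg.det(A.T), np.linalg.det(A)))           # True
print(np.allclose(np.linalg.det(k * A), k**n * np.linalg.det(A)))  # True
```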

#determinants
If $A$ and $B$ are both $n \times n$, then $\det(AB)=\det(A)\det(B)$.

Prof. Hernandez calls it the "beautiful theorem"

This provides a few consequences:

  1. $\det(AB)=\det(A)\det(B)=\det(B)\det(A)=\det(BA)$
  2. $\det(A_1 \cdots A_n)=\det(A_1)\cdots\det(A_n)$
  3. If $A_1=\cdots=A_n=A$, the previous formula becomes $\det(A^n)=\det(A)^n$
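
A quick numerical spot-check of the product rule and its power special case (random matrices via NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((3, 3))
B = rng.random((3, 3))

detA, detB = np.linalg.det(A), np.linalg.det(B)

print(np.allclose(np.linalg.det(A @ B), detA * detB))   # True
print(np.allclose(np.linalg.det(B @ A), detA * detB))   # True

# Special case A1 = ... = An = A: det(A^n) = det(A)^n
print(np.allclose(np.linalg.det(np.linalg.matrix_power(A, 5)), detA**5))  # True
```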

#determinants
Suppose that $A$ is $n \times n$ and invertible:

  1. $\det(A)\neq 0$
  2. $\det(A^{-1})=\det(A)^{-1}=\frac{1}{\det(A)}$
  3. $\det(A^T)=\det(A)$

If $A$ is a square matrix, then the following statements are equivalent:

  1. $A$ is invertible
  2. The only solution to $Ax=0$ is $x=0$
  3. The RREF of $A$ is $I$
  4. $A$ is a product of elementary matrices
  5. $\det(A)\neq 0$

The equivalence is proved in a cycle: $1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 4 \Rightarrow 5 \Rightarrow 1$


#determinants
If $A$ is a square matrix and $B$ is obtained from $A$ by the stated row operation:

  1. If a multiple of one row is added to another row, then $\det(B)=\det(A)$
  2. If a row is multiplied by a scalar $k$, then $\det(B)=k\det(A)$
  3. If a row is swapped with another, then $\det(B)=-\det(A)$
  4. If any two rows or columns of $A$ are identical, then $\det(A)=0$
  5. If there is a row or column of zeroes in $A$, then $\det(A)=0$
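
These rules are easy to spot-check with NumPy on a made-up $3 \times 3$ matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
d = np.linalg.det(A)

# 1. Adding a multiple of one row to another leaves det unchanged
B = A.copy(); B[1] += 10 * B[0]
print(np.allclose(np.linalg.det(B), d))       # True

# 2. Scaling a row by k scales det by k
B = A.copy(); B[2] *= 7
print(np.allclose(np.linalg.det(B), 7 * d))   # True

# 3. Swapping two rows negates det
B = A.copy(); B[[0, 1]] = B[[1, 0]]
print(np.allclose(np.linalg.det(B), -d))      # True
```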

#diagonalizeable
A square matrix $A$ is diagonalizable if we can factor $A$ as $A=PDP^{-1}$, where $P$ is invertible and $D$ is diagonal

Ideas:

  1. Diagonal matrices are easy to deal with (computing the determinant, eigenvalues, eigenvectors, etc.)
  2. If $A$ is diagonalizable, then $A$ inherits those properties from $D$

#diagonalizeable
A square matrix $A$ is diagonalizable if and only if we can build an invertible matrix $P$ whose columns are eigenvectors of $A$. In this case $P=[x_1, x_2, \ldots, x_n]$, and $D$ is the diagonal matrix of the corresponding eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$.
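
A short NumPy sketch of this construction (reusing the made-up matrix from the eigenvalue section, which does have a full set of linearly independent eigenvectors):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Columns of P are the eigenvectors x1, ..., xn;
# D holds the matching eigenvalues on its diagonal
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True: A = P D P^-1

# Payoff: powers of A reduce to powers of the diagonal entries of D
A_cubed = P @ np.diag(eigenvalues**3) @ np.linalg.inv(P)
print(np.allclose(A_cubed, np.linalg.matrix_power(A, 3)))  # True
```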

Properties

$\alpha$ and $\beta$ are both constants, while $A$, $B$, and $C$ are all matrices.