Some notes about linear algebra. God, I fucking hated linear algebra at UNC. Same with differential equations. I have no idea why there's an entire class taught (that's mandatory) on differential equations. Tara told me this joke between math professors at Harvard: "the less diff eq you know, the better you are at math". Yes, it's just the pure mathematicians shitting on the applied math people...but boy do I love that sentiment.

If row reduction eliminates one of the equations (it turns into a row of zeros), then that equation was a scalar multiple of another one, and the system has infinitely many solutions since there are fewer independent equations than unknowns.

Upper triangular means all entries below the diagonal are zero.

If you can't row reduce all the way to the identity, read the answer off the echelon form: e.g. the row (1  -2  1 | 2) means x - 2y + z = 2, so x = 2y - z + 2. That's a perfectly good way to state the solution when the matrix is upper triangular. If it's not square, you just need everything below each leading entry to be zero.
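
Quick sanity check of reading a solution off the echelon form, assuming SymPy (the single-row system here is just the example above):

    from sympy import Matrix

    # Augmented matrix for the single equation x - 2y + z = 2
    A = Matrix([[1, -2, 1, 2]])
    rref_form, pivot_cols = A.rref()
    print(rref_form)    # Matrix([[1, -2, 1, 2]]) -- already in reduced form
    print(pivot_cols)   # (0,) -- x is the only leading (pivot) variable
    # Reading off the pivot row: x = 2y - z + 2, with y and z free.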

Remember you can swap rows!! 

Memorize what's required for row echelon form: any all-zero rows are at the bottom, and each leading entry sits strictly to the right of the leading entry in the row above (so everything below a leading entry is zero). Reduced row echelon form additionally requires each leading entry to be 1 and to be the only nonzero entry in its column.

If some variables are left over after row reduction (the ones whose columns have no leading entry, i.e. the ones to the right of the leading terms), assign them arbitrary values: these are the free variables. Then write the solution vector with the free variables as parameters and the other variables expressed as combinations of them. These systems have infinitely many solutions.
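
A sketch of the same idea with SymPy's linsolve (the system is made up; the free variables show up as the parameters in the answer):

    from sympy import Matrix, symbols, linsolve

    x, y, z = symbols('x y z')
    # Augmented matrix: one equation, three unknowns -> two free variables
    aug = Matrix([[1, -2, 1, 2]])
    print(linsolve(aug, (x, y, z)))
    # {(2*y - z + 2, y, z)} -- y and z are free, x is written in terms of them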

Matrix rank is the number of leading entries (pivots) in the row echelon form, i.e. the number of nonzero rows after row reduction.
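
Quick check, assuming NumPy (the matrix is arbitrary):

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [2., 4., 6.],    # 2x the first row, so it adds nothing
                  [0., 1., 1.]])
    print(np.linalg.matrix_rank(A))  # 2 -- only two leading entries survive row reduction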

For determining whether a vector b can be written as a linear combination of two vectors v and w, write b = x*v + y*w where x, y are scalars; reading this entry by entry, it's a system of linear equations. Solve it with row reduction and you'll find x and y (or discover that no solution exists).
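
Sketch of that setup in NumPy (v, w, b are made up; np.linalg.solve does the row reduction for us):

    import numpy as np

    v = np.array([1., 2.])
    w = np.array([3., 1.])
    b = np.array([5., 5.])

    # Columns of the coefficient matrix are v and w; solving gives the scalars x, y
    A = np.column_stack([v, w])
    x, y = np.linalg.solve(A, b)
    print(x, y)                           # 2.0 1.0
    print(np.allclose(x * v + y * w, b))  # True -- b really is 2v + w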

For Ax, it's much better to think of the first COLUMN of A being scaled by the first entry of x, the second column by the second entry, and so on, with the results summed.

You can think about solving a linear system by splitting the matrix into its columns and figuring out what linear combination of those columns gives the augmented column (the right-hand side).
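
A tiny NumPy illustration of that column picture (numbers are arbitrary):

    import numpy as np

    A = np.array([[1., 4.],
                  [2., 5.],
                  [3., 6.]])
    x = np.array([10., 100.])

    # A @ x is the first column scaled by x[0] plus the second column scaled by x[1]
    by_columns = x[0] * A[:, 0] + x[1] * A[:, 1]
    print(A @ x)        # [410. 520. 630.]
    print(by_columns)   # same thing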

If the dot product is zero, then the two vectors are perpendicular.

Note that a transformation T is linear if and only if T(v+w) = T(v) + T(w) and T(kv) = kT(v) for all vectors v, w and all scalars k.
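
A sketch of checking those two properties numerically for a matrix map T(v) = Av (random test vectors, not a proof; assumes NumPy):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((2, 2))

    def T(v):
        return A @ v   # any matrix map is linear

    v, w = rng.standard_normal(2), rng.standard_normal(2)
    k = 3.7
    print(np.allclose(T(v + w), T(v) + T(w)))  # True: additivity
    print(np.allclose(T(k * v), k * T(v)))     # True: homogeneity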

The matrix T that reflects a vector about a line through the origin is
    a   b
    b   -a
where a^2 + b^2 = 1. For the line making angle phi with the x-axis, a = cos(2*phi) and b = sin(2*phi).

Counterclockwise rotation through theta is 

    cos(theta)   -sin(theta)
    sin(theta)    cos(theta)

Vertical shearing is 
    1   0
    k   1
where k is a constant.

Horizontal shearing is
    1   k
    0   1

Rotation through pi/2 is
    0   -1
    1    0
and its negative is rotation through -pi/2.
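
A quick NumPy sketch of these matrices in action (the angle, the reflection line, and the shear factor are all picked arbitrarily):

    import numpy as np

    theta = np.pi / 2
    rotate = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])

    phi = np.pi / 6                       # reflect about the line at angle phi
    a, b = np.cos(2 * phi), np.sin(2 * phi)
    reflect = np.array([[a,  b],
                        [b, -a]])

    k = 2.0
    shear_vertical = np.array([[1., 0.],
                               [k,  1.]])

    e1 = np.array([1., 0.])
    print(np.round(rotate @ e1, 3))    # [0. 1.] -- e1 rotated 90 degrees
    print(np.round(reflect @ e1, 3))   # [0.5 0.866] -- e1 reflected about the 30-degree line
    print(shear_vertical @ e1)         # [1. 2.]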



A square matrix is invertible if and only if its row reduced form is the identity matrix.

If A is invertible and b is a vector, then Ax = b has a unique solution (x = A^-1 * b). If A is not invertible, then Ax = b has either no solutions or infinitely many. When b = 0, x = 0 is always a solution; if A is invertible it's the only one, and if A is not invertible there are infinitely many solutions (the kernel is nontrivial, so it contains infinitely many vectors).
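
Sketch, assuming NumPy and an invertible A I made up:

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 3.]])
    b = np.array([3., 5.])

    x = np.linalg.solve(A, b)             # preferred in practice
    x_via_inverse = np.linalg.inv(A) @ b  # the x = A^-1 * b picture from the note
    print(x, x_via_inverse)               # both [0.8 1.4]
    print(np.allclose(A @ x, b))          # True -- and this solution is unique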

If A and B are invertible n by n matrices, then BA is invertible as well. 
NOTE (BA)^-1 = A^-1 B^-1 
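
Quick numeric check of that order reversal (arbitrary random matrices, which are invertible with probability 1):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    B = rng.standard_normal((3, 3))

    lhs = np.linalg.inv(B @ A)
    rhs = np.linalg.inv(A) @ np.linalg.inv(B)
    print(np.allclose(lhs, rhs))   # True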

Associative means (AB)C = A(BC), so a product like ABC doesn't need parentheses. (Matrix multiplication is not commutative, though, so the order of the factors still matters.)

Matrix is invertible IFF determinant is nonzero. 

A = (v w) with v and w as the columns. Then
det(A) = mag(v) mag(w) sin(theta), where theta is the angle from v to w; its absolute value is the area of the parallelogram spanned by v and w.

If v and w are parallel, then det(A)=0. 
Det >0 if angle is between zero and pi and <0 if it's between negative pi and zero. 
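
Numerical check of the area formula (v and w are made up; theta is measured from v to w):

    import numpy as np

    v = np.array([2., 0.])
    w = np.array([1., 3.])
    A = np.column_stack([v, w])

    det = np.linalg.det(A)
    theta = np.arctan2(w[1], w[0]) - np.arctan2(v[1], v[0])   # angle from v to w
    area = np.linalg.norm(v) * np.linalg.norm(w) * np.sin(theta)
    print(det, area)   # both ~6.0 -- positive because w is counterclockwise from v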

Learn the alternate way to find the inverse using the determinant: A^-1 = adj(A) / det(A) (the adjugate/cofactor formula). For a 2x2 matrix
    a   b
    c   d
the inverse is 1/(ad - bc) times
    d   -b
    -c   a
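
A sketch of that 2x2 adjugate-over-determinant formula, checked against NumPy (the matrix is arbitrary):

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 3.]])
    a, b = A[0]
    c, d = A[1]

    det = a * d - b * c
    inverse = (1 / det) * np.array([[ d, -b],
                                    [-c,  a]])   # swap diagonal, negate off-diagonal
    print(inverse)
    print(np.allclose(inverse, np.linalg.inv(A)))  # True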