Studying for Final 



Problem 4 on Practice test: Know how to do it but just need to make note of the general method. Page 117 for finding the kernel of a linear transformation. So what you do is row reduce (augmented with zero), then look at which columns have a leading 1: those are the leading-variable columns, and every column without a leading 1 is a free-variable column. Write each leading variable in terms of the free variables, and then read off one kernel basis vector per free variable. 
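
A minimal sketch of that procedure with sympy (the matrix here is my own made-up example, not the one from the practice test):

from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])     # hypothetical matrix of the transformation
print(A.rref())             # pivot columns have leading 1s; the rest are free-variable columns
print(A.nullspace())        # one kernel basis vector per free variable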

The equation P=QQ^T gives P, the matrix of an orthogonal projection, where Q is the matrix whose columns are orthonormal vectors spanning the subspace. So if you have an orthogonal projection, figure out an orthonormal basis of the space being projected onto, put those vectors in a matrix Q as columns, and then use that relation to get P, which will be the matrix corresponding to that projection/transformation. 
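
A quick numerical sketch of that relation, using a made-up subspace of R^3 (u1 and u2 below are my own example and are already orthonormal):

import numpy as np

u1 = np.array([1.0, 0.0, 0.0])      # orthonormal basis of the subspace
u2 = np.array([0.0, 1.0, 0.0])      # being projected onto (hypothetical)
Q = np.column_stack([u1, u2])       # Q holds the orthonormal vectors as columns
P = Q @ Q.T                         # matrix of the orthogonal projection
x = np.array([3.0, 4.0, 5.0])
print(P @ x)                        # the projection of x onto span{u1, u2}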




When trying to find 


Remember, pretty basically: a transformation T is linear if T(f+g)=T(f)+T(g) and T(kf)=kT(f) for all inputs f, g and scalars k. 
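
A tiny sanity check of that definition with two made-up maps (my own examples): T1(x) = 2x passes both conditions, T2(x) = x + 1 fails the first one.

import numpy as np

f, g, k = np.array([1.0, 2.0]), np.array([3.0, -1.0]), 5.0
T1 = lambda v: 2 * v                # linear
T2 = lambda v: v + 1                # not linear
print(np.allclose(T1(f + g), T1(f) + T1(g)), np.allclose(T1(k * f), k * T1(f)))   # True True
print(np.allclose(T2(f + g), T2(f) + T2(g)))                                      # False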




2.1 When trying to figure out the matrix of a transformation, just apply the transformation to each of the standard basis vectors, and put the result in a matrix so 

	= [  T(e1)    T(e2)    T(e3)  ]		(each T(ei) goes in as a column)
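
A short sketch of that recipe with a made-up transformation T on R^3 (not one from the book):

import numpy as np

def T(v):                           # hypothetical example transformation
    x, y, z = v
    return np.array([x + y, 2 * z, x - z])

e = np.eye(3)                       # standard basis vectors e1, e2, e3 as columns
A = np.column_stack([T(e[:, j]) for j in range(3)])
print(A)                            # columns are T(e1), T(e2), T(e3)
print(np.allclose(A @ np.array([1.0, 2.0, 3.0]), T(np.array([1.0, 2.0, 3.0]))))   # True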
	



4.1 For finding a basis of the matrices that commute with a given matrix A, write a matrix X of unknowns, set AX = XA, and solve the resulting equations. Then split the general solution up into however many matrices there are leftover free variables. So, you should end up with the span of some number of matrices, where they have actual values rather than variable values. 
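
A sketch of that computation in sympy for a made-up 2x2 matrix A (the setup AX = XA and the split by free variable is the same idea for bigger matrices):

from sympy import Matrix, symbols, solve

a, b, c, d = symbols('a b c d')
A = Matrix([[1, 1],
            [0, 1]])                # hypothetical given matrix
X = Matrix([[a, b],
            [c, d]])                # matrix of unknowns
sol = solve(list(A * X - X * A), [a, b, c, d], dict=True)[0]
general = X.subs(sol)               # general commuting matrix, still has free variables
print(general)
for s in general.free_symbols:
    print(general.diff(s))          # one basis matrix (actual values) per free variable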



4.2: Isomorphisms and stuff. To figure out if T: V -> W is an isomorphism: first make sure the dimensions of the two spaces match, then one of the following has to hold: you can write down an inverse, ker(T)={0}, or im(T)=W. 
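
A quick check along those lines, once T is written as a square matrix in some bases (the matrix below is a made-up example):

from sympy import Matrix

A = Matrix([[2, 1],
            [1, 1]])                # hypothetical matrix of T (dimensions already match)
print(A.det() != 0)                 # True -> invertible -> isomorphism
print(A.nullspace())                # [] -> ker(T) = {0}, same conclusion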

	   
4.3: Given a problem that says “find matrix of linear transformation” in some basis B = {b1, b2, b3, ...}, you need to apply the transformation to each of the basis vectors and write each result in B-coordinates, and the resulting matrix “B” will be the transformation T but in that basis. So essentially, just remember: 

B (where B is the matrix of the transformation in the basis you’re using): 

	= [  [T(b1)]_B    [T(b2)]_B    [T(b3)]_B  ]

The reason you have the _B is because you need to apply the transformation to each of the basis vectors (the ones you’re trying to put everything in terms of), and then write the result back in terms of that basis before it goes in as a column. 

For when you are looking at bigger spaces like R^2x2, you can just do the operation on an input, write both the input and the output in basis B, and then the B matrix is whatever matrix, when multiplied by the input’s coordinate vector [x]_B, produces [T(x)]_B. 
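
A sketch of the whole 4.3 recipe in numpy, with a made-up T (given in the standard basis) and a made-up basis; solving S c = T(b_j) finds the B-coordinates of each T(b_j):

import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])          # hypothetical T in the standard basis
S = np.array([[1.0, 1.0],
              [1.0, -1.0]])         # columns are the basis vectors b1, b2
B = np.column_stack([np.linalg.solve(S, A @ S[:, j]) for j in range(2)])
print(B)                            # matrix of T in basis B; same as inv(S) @ A @ S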


5.1 So what you do to find the projection of a vector onto a subspace is to make the vectors that span that subspace into unit vectors (you also should have made sure that they’re orthogonal), and then all you have to do is dot the vector that’s being projected with each of the unit vectors you found, multiply each of those dot products by its unit vector, and add the results. So essentially, given x, and trying to find proj_V(x) where V is spanned by two orthogonal vectors v1 and v2: set u1 = v1/||v1|| and u2 = v2/||v2||, and then proj_V(x) = (x · u1)u1 + (x · u2)u2. 
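
A numerical sketch of that formula with my own example vectors (v1 and v2 are already orthogonal here):

import numpy as np

v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([0.0, 0.0, 2.0])
u1, u2 = v1 / np.linalg.norm(v1), v2 / np.linalg.norm(v2)   # unit vectors
x = np.array([1.0, 2.0, 3.0])
proj = (x @ u1) * u1 + (x @ u2) * u2
print(proj)                         # proj_V(x), with V = span{v1, v2}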

5.2 Gram-Schmidt process: essentially just creating an orthonormal basis…so given vectors that span a space, all you do is take the first one, divide by its length, and that’s one unit vector u1. Then you take the second one, v2, and subtract off however much of v2 is already represented along u1, so you have v2_perp = v2 - (v2 · u1)u1. That gives you the vector in the right direction, but you need to make it a unit vector, so you divide by its length. 
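
A minimal Gram-Schmidt sketch for two made-up vectors (the same two steps repeat for each additional vector):

import numpy as np

v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([1.0, 0.0, 1.0])
u1 = v1 / np.linalg.norm(v1)                    # first unit vector
v2_perp = v2 - (v2 @ u1) * u1                   # subtract the part of v2 along u1
u2 = v2_perp / np.linalg.norm(v2_perp)          # second unit vector
print(u1, u2)                                   # orthonormal basis of span{v1, v2}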

WHEN THEY ASK FOR “CHANGE OF BASIS MATRIX” THEY’RE ESSENTIALLY ASKING FOR “R” in M=QR (factorization). 

You can construct R with a formula. Just remember that R is upper triangular with the lengths ||v_j_perp|| on the diagonal, and the dot products u_i · v_j (for i < j) in the entries above the diagonal. 

Remember that Q is just the orthonormal basis vectors lumped into a matrix as its columns. 

Refresh QR factorization, make sure you remember how to do this. Do practice problems here. 
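
As a refresher, here is a small QR sketch on a made-up M, building Q by Gram-Schmidt and R from the formula above (R_jj = ||v_j_perp||, R_ij = u_i · v_j for i < j):

import numpy as np

M = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])                      # hypothetical M with columns v1, v2
v1, v2 = M[:, 0], M[:, 1]
u1 = v1 / np.linalg.norm(v1)
v2_perp = v2 - (v2 @ u1) * u1
u2 = v2_perp / np.linalg.norm(v2_perp)
Q = np.column_stack([u1, u2])
R = np.array([[np.linalg.norm(v1), u1 @ v2],
              [0.0, np.linalg.norm(v2_perp)]])
print(np.allclose(Q @ R, M))                    # True: M = QR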

5.3: If you have an orthonormal basis u_1, u_2, etc. put into a matrix Q as columns, then the matrix of the orthogonal projection is P=QQ^T, where Q^T is the transpose of Q. 

6.1: For the determinant of a 3 by 3 matrix, just repeat the first column and second column after the matrix, draw diagonal lines going down-right and down-left, put plus signs on the down-right diagonals and minus signs on the down-left ones, and the sum is the determinant. Remember the pattern method for determinants of larger matrices. The determinant of a diagonal matrix is just the product of the diagonal entries. You can pull a constant factor out of a single row and in front of the determinant. 
	Determinant of block matrix? Do we need to know this? 
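
A Sarrus-style sketch of the 3 by 3 rule on a made-up matrix, checked against numpy's determinant:

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])
a, b, c = A[0]
d, e, f = A[1]
g, h, i = A[2]
det = a*e*i + b*f*g + c*d*h - c*e*g - a*f*h - b*d*i   # right diagonals minus left diagonals
print(det, np.linalg.det(A))                          # both give 1.0 (up to rounding)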

6.2 Det(A) = Det(A^T): the determinant of a matrix is equal to the determinant of its transpose. Swapping two rows switches the sign of the determinant. 
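
A quick numeric sanity check of both facts on a made-up matrix (my own example):

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 4.0],
              [0.0, 5.0, 6.0]])
print(np.isclose(np.linalg.det(A), np.linalg.det(A.T)))   # det(A) == det(A^T)
B = A[[1, 0, 2], :]                                       # swap the first two rows
print(np.isclose(np.linalg.det(B), -np.linalg.det(A)))    # sign flips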

6.3 



7.3 If two matrices are similar, then they have the same eigenvalues (however, they may not have the same eigenvectors). They also have the same determinant and the same trace (and, more generally, the same characteristic polynomial). 
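
A quick check with made-up matrices: B = S^{-1} A S is similar to A, and the shared quantities come out the same:

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
S = np.array([[1.0, 1.0],
              [1.0, 2.0]])                      # any invertible S works
B = np.linalg.inv(S) @ A @ S
print(np.sort(np.linalg.eigvals(A)), np.sort(np.linalg.eigvals(B)))    # same eigenvalues
print(np.isclose(np.linalg.det(A), np.linalg.det(B)))                  # same determinant
print(np.isclose(np.trace(A), np.trace(B)))                            # same trace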


Eigenvector problems. What you have to do is go through and find the characteristic equation using det(A-lambda*I); the real roots of that equation will be the eigenvalues, and each root has an algebraic multiplicity. Then you plug each lambda into the matrix A-lambda*I and find the kernel of that matrix. The kernel, which could have more than one basis vector, gives the eigenvectors of the matrix associated with that eigenvalue. The geometric multiplicity of an eigenvalue is just how many basis vectors are in this kernel (its dimension). You always have geometric multiplicity <= algebraic multiplicity, and if the sum of the geometric multiplicities is not equal to n, then you CAN’T DIAGONALIZE THE MATRIX. Otherwise you can. The eigenbasis is the set of all of these eigenvectors. If it’s diagonalizable, then you just write a matrix that has the eigenvalues that you found on the diagonal. But remember that if one of them had geometric multiplicity two, then you have to put it on the diagonal twice.
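
A sketch of the whole recipe in sympy on a made-up matrix (chosen so the geometric multiplicity comes out smaller than the algebraic one):

from sympy import Matrix, symbols

lam = symbols('lambda')
A = Matrix([[4, 1],
            [0, 4]])                            # hypothetical example
print((A - lam * Matrix.eye(2)).det().factor()) # characteristic polynomial: (lambda - 4)**2
print(A.eigenvects())                           # [(eigenvalue, algebraic mult., eigenvectors)]
print(A.is_diagonalizable())                    # False: geometric mult. 1 < algebraic mult. 2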