read more
5 questions of similar size. Just chapter 3. 

To determine linear dependence among the columns of a matrix (for finding the kernel, for instance), row reduce as far as possible, so each pivot column has a leading 1, and then find constants to multiply the columns by so that every row's entries combine to zero. Typically you look at the free columns, since the pivot columns have zeros in all their other entries.
| 1  0  -1 | 0
| 0  1   2 | 0
Here the third column equals -1 times the first plus 2 times the second, so (first) - 2*(second) + (third) = 0. So the answer is t times the first column, -2t times the second, and t times the third; the kernel is spanned by [1, -2, 1].
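A quick sanity check of the note above (a minimal Python sketch; the matrix is the row-reduced one from the example): the vector [1, -2, 1] really is in the kernel.

```python
# The row-reduced matrix from the note above.
A = [[1, 0, -1],
     [0, 1, 2]]

# Free column: c3 = -1*c1 + 2*c2, so c1 - 2*c2 + c3 = 0
# and the kernel is spanned by x = [1, -2, 1].
x = [1, -2, 1]

Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(2)]
print(Ax)  # [0, 0]
```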

Practice more “find the span of the kernel” problems. 

If you are tasked with coming up with a description of the image of a transformation and you come to a span of two linearly independent vectors, that describes a plane, while the span of one nonzero vector describes just a line.

If you're trying to determine whether one vector can be written as a linear combination of two others, put the vector of interest in as the augmentation column and then do row reduction. If the left-hand side reduces fully (the system is consistent), the augmentation column gives the coefficients of the required linear combination.
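A tiny worked instance of this (the numbers are mine, not from the notes): check whether v = [3, 2] is a combination of u = [1, 0] and w = [1, 1].

```python
# Augment [u w | v] and row reduce:
#   [1 1 | 3]       [1 0 | 1]
#   [0 1 | 2]  -->  [0 1 | 2]
# The system is consistent, so v = 1*u + 2*w.
u, w, v = [1, 0], [1, 1], [3, 2]
a, b = 1, 2  # read off the augmentation column
combo = [a * u[i] + b * w[i] for i in range(2)]
print(combo == v)  # True
```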

Given a plane x + 2y + 3z = 0, the line perpendicular to it is spanned by the vector [1, 2, 3]: that is, the coefficients of the variables (the normal vector).
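A quick check of this (the two points in the plane are chosen by hand): the coefficient vector is perpendicular to vectors lying in the plane.

```python
# Plane x + 2y + 3z = 0 has normal n = [1, 2, 3] (the coefficients).
n = [1, 2, 3]
p1 = [2, -1, 0]   # 2 + 2*(-1) + 3*0 = 0, so p1 lies in the plane
p2 = [3, 0, -1]   # 3 + 2*0 + 3*(-1) = 0, so p2 does too
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
print(dot(n, p1), dot(n, p2))  # 0 0
```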

Parameterization of an ellipse???

Write down assorted properties of the kernel and image, such as what happens to them when you take a transformation to the n-th power (T^n).

Relook at 40-52. 

Remember: a subspace must contain 0 and be closed under scalar multiplication and vector addition.

If you have two subspaces, their intersection will also be a subspace while their union may not be a subspace. 

Remember to DIVIDE rows when doing row reduction. 

Note that ker(A) = {0} and ker(B) = {0} implies ker(AB) = {0}.
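A numeric illustration (the matrices are mine): both factors have trivial kernel (nonzero determinant), and so does the product.

```python
A = [[1, 2], [3, 5]]   # det = 1*5 - 2*3 = -1, so ker(A) = {0}
B = [[2, 1], [1, 1]]   # det = 2*1 - 1*1 = 1,  so ker(B) = {0}
det2 = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
print(det2(AB))  # -1, nonzero, so ker(AB) = {0}
```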

When trying to find a basis of the kernel given a matrix, you CAN do row reduction and then find the vectors: row operations do not change the kernel. (It's the IMAGE where you have to go back and use the original columns.)

To find a vector perpendicular to another vector (or several vectors), set up a system of equations and figure out what to multiply each entry by to get zero. That is exactly requiring the unknown vector to have dot product zero with each of your given vector(s).
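An example of this (my vectors): find v perpendicular to both u1 = [1, 2, 3] and u2 = [0, 1, 1].

```python
# The system u1 . v = 0, u2 . v = 0 (solved by hand here, or via the
# cross product in R^3) gives v = [-1, -1, 1].
u1, u2, v = [1, 2, 3], [0, 1, 1], [-1, -1, 1]
dot = lambda a, b: sum(x * y for x, y in zip(a, b))
print(dot(u1, v), dot(u2, v))  # 0 0
```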

When you're trying to find a basis of something and you're given a system of equations, do what you would normally do to find a basis of the kernel: the solution set of a homogeneous system is exactly the kernel of its coefficient matrix.

If you row reduce, is the image of the matrix equal to the span of the non-redundant (pivot) columns of the row-reduced matrix? NO: FIND THE CORRESPONDING COLUMNS IN THE ORIGINAL MATRIX.
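A small illustration (the matrix is mine) of why the original columns matter:

```python
# rref of A has pivots in columns 1 and 3, so a basis for im(A) is the
# ORIGINAL columns 1 and 3 (not the columns of the rref).
A = [[1, 2, 3],
     [2, 4, 8]]
c1, c2, c3 = ([row[j] for row in A] for j in range(3))
assert c2 == [2 * x for x in c1]          # column 2 is redundant
det = c1[0] * c3[1] - c1[1] * c3[0]       # c1, c3 independent iff nonzero
print(det)  # 2
```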

Note: S is the matrix with the basis vectors you want, lined up as columns. Remember B = S^-1 * A * S.

When finding the matrix B of a linear transformation A with respect to a basis ℬ, just apply the matrix A to each of the basis vectors in ℬ, write each result in ℬ-coordinates, and assemble the coordinate vectors as the columns of B.
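A worked sketch (my numbers): for A = [[1, 2], [2, 1]] and the basis v1 = [1, 1], v2 = [1, -1], computing B = S^-1 * A * S gives a diagonal matrix, matching the column-by-column recipe.

```python
from fractions import Fraction as F

A = [[1, 2], [2, 1]]
S = [[1, 1], [1, -1]]   # columns are the basis vectors v1, v2
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
# 2x2 inverse formula: S^-1 = 1/det * [[d, -b], [-c, a]]
Sinv = [[F(S[1][1], det), F(-S[0][1], det)],
        [F(-S[1][0], det), F(S[0][0], det)]]
mul = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(2))
                     for j in range(2)] for i in range(2)]
B = mul(Sinv, mul(A, S))
print(B == [[3, 0], [0, -1]])  # True: A*v1 = 3*v1 and A*v2 = -1*v2
```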

Say you're trying to set up a basis adapted to a line L: one basis vector should point along L, and all the others must be perpendicular to L. In that basis, projection onto L just keeps the first coordinate (the v1 component) and drops all the others, so its matrix is diag(1, 0, ..., 0).
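A concrete sketch of this (my numbers): projection onto the line spanned by v1 = [1, 1], using the adapted basis v1 (along L) and v2 = [1, -1] (perpendicular to L), converted back to standard coordinates via P = S * diag(1, 0) * S^-1.

```python
from fractions import Fraction as F

S = [[1, 1], [1, -1]]            # columns: v1 along L, v2 perpendicular
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[F(S[1][1], det), F(-S[0][1], det)],
        [F(-S[1][0], det), F(S[0][0], det)]]
D = [[1, 0], [0, 0]]             # projection in the adapted basis
mul = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(2))
                     for j in range(2)] for i in range(2)]
P = mul(S, mul(D, Sinv))
apply_P = lambda v: [sum(P[i][j] * v[j] for j in range(2)) for i in range(2)]
# P fixes v1 (on the line) and kills v2 (perpendicular to it):
print(apply_P([1, 1]) == [1, 1], apply_P([1, -1]) == [0, 0])  # True True
```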

For an “are two matrices similar” problem, call them A and B: they are similar if there is an invertible matrix S (nonzero determinant) with AS = SB, equivalently B = S^-1 * A * S. Set it up by giving S variable entries, multiplying out both sides of AS = SB, and solving the resulting system; then check that some solution S is actually invertible.
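A worked check of this (numbers mine): A = [[1, 1], [0, 2]] is similar to B = diag(1, 2); the S below was found by solving AS = SB for S's entries.

```python
A = [[1, 1], [0, 2]]
B = [[1, 0], [0, 2]]
S = [[1, 1], [0, 1]]   # one solution of AS = SB with variable entries
mul = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(2))
                     for j in range(2)] for i in range(2)]
detS = S[0][0] * S[1][1] - S[0][1] * S[1][0]
# AS = SB and S invertible, so A and B are similar.
print(mul(A, S) == mul(S, B), detS != 0)  # True True
```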

For figuring out a good basis for a given linear transformation (that is, a basis where the transformation's matrix is diagonal), we must choose v1 and v2 such that T(v1) and T(v2) are scalar multiples of their inputs (eigenvectors). This way we get a diagonalized matrix.
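An example (mine): for A = [[2, 1], [1, 2]], the vectors v1 = [1, 1] and v2 = [1, -1] satisfy A*v = (scalar)*v, so in the basis (v1, v2) the matrix is diag(3, 1).

```python
A = [[2, 1], [1, 2]]
apply_A = lambda v: [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]
v1, v2 = [1, 1], [1, -1]
print(apply_A(v1), apply_A(v2))  # [3, 3] [1, -1], i.e. 3*v1 and 1*v2
```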

INVERSE (not determinant!) of a 2x2 matrix
| a  b |
| c  d |
is 1/det(A) times
| d  -b |
| -c  a |
where det(A) = ad - bc.
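Checking the formula on a concrete matrix (my numbers): A * A^-1 should come out to the identity.

```python
from fractions import Fraction as F

a, b, c, d = 1, 2, 3, 4
det = a * d - b * c                          # -2
Ainv = [[F(d, det), F(-b, det)],             # 1/det * [[d, -b], [-c, a]]
        [F(-c, det), F(a, det)]]
A = [[a, b], [c, d]]
prod = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
print(prod == [[1, 0], [0, 1]])  # True
```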


Things to do: 

Invertible if and only if the determinant does not equal zero.

Know all the numbers about rank and dim and stuff. dim(ker A) = nullity. dim(image A) = rank. 
	for an n x m matrix: (nullity of A) + (rank of A) = m
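A quick rank-nullity check on a hypothetical 2x3 matrix: A = [[1, 2, 3], [2, 4, 8]] row reduces with pivots in columns 1 and 3, so rank = 2 and, with m = 3 columns, nullity = m - rank = 1.

```python
m, rank = 3, 2        # 3 columns, 2 pivots after row reduction
nullity = m - rank
print(nullity, rank + nullity == m)  # 1 True
```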

Know how to do problem 54 in section 3.2!! 

30 IN 3.3