Notes for my data structures final, now taught as COMP 210.

Everything we covered: 

Efficient implementation of ADTs 
	Math background
	Proofs - by induction 
	Recursive programs and their analysis 

Linear ADTs - lists, stacks, queues, deques
	array and linked implementations 

Priority Queues 
	heap implementation 

Sorting - run time: O(n), O(n log n); memory: in place / not 
	Heapsort 
	Insertion sort
	Merge sort 

Dynamic Dictionaries 
	Binary trees 
	BST
	Balanced BSTs 
		AVL tree 
	Hash Tables 
		Chaining 
		Probing - quadratic, double hashing, cuckoo hashing 

Graphs 
	Definitions
	Representation 
		Adjacency Matrix 
		Adjacency List 
	Topological sort/cycle detection



Needs some review:

O(n) is similar to <= (grows no faster than a given function. So this is an upper bound.)

Omega(n) is similar to >= (grows at least as fast as a given function. So this is a lower bound.)

Theta(n) is similar to == (grows at the same rate. This is true if both O(n) and Omega(n) are true). 
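For reference, the formal versions of these (a sketch of the standard definitions, not copied from the class notes; c and n_0 are the usual constants):

\begin{aligned}
f(n) = O(g(n)) &\iff \exists\ c > 0,\ n_0 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0 \\
f(n) = \Omega(g(n)) &\iff \exists\ c > 0,\ n_0 \ \text{such that}\ f(n) \ge c \cdot g(n) \ \text{for all}\ n \ge n_0 \\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \ \text{and}\ f(n) = \Omega(g(n))
\end{aligned}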

Secondary hashing (double hashing) is when you have a second hash function, and if you hit a collision, you probe at offsets of i times that second hash value, where i increments. It’s sort of like probing but with a hash function as the step size instead of just a fixed number. 
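A minimal sketch of that probe sequence in Python (the table size and the h1/h2 functions are hypothetical stand-ins, not necessarily what the class used):

def double_hash_probe(key, table_size, h1, h2):
    """Yield the slots to try for key: h1(key), then offsets of i * h2(key)."""
    base = h1(key) % table_size
    step = h2(key) % table_size
    if step == 0:              # the second hash must never be 0, or we would never move
        step = 1
    for i in range(table_size):
        yield (base + i * step) % table_size

# Toy usage (hash functions made up for illustration):
# list(double_hash_probe(42, table_size=11, h1=lambda k: k, h2=lambda k: 1 + k % 7))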

Cuckoo hashing is having the items bounce back and forth between two tables, based on two independent hash functions: an insert that collides kicks out the current occupant, which then gets re-inserted into the other table. 
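A rough sketch of a cuckoo insert under those assumptions (two tables with None marking empty slots, two independent hash functions; the max_kicks cutoff is my own choice, and a real implementation would rehash or resize when it gives up):

def cuckoo_insert(key, tables, hashes, max_kicks=32):
    """tables: [t0, t1], two lists of equal size; hashes: [h0, h1].
    Try to place key in its slot; if the slot is taken, evict the occupant
    and bounce it into the other table, repeating until something lands in
    an empty slot or we give up."""
    size = len(tables[0])
    which = 0
    for _ in range(max_kicks):
        slot = hashes[which](key) % size
        key, tables[which][slot] = tables[which][slot], key   # place key, evict old occupant
        if key is None:        # the slot was empty, so nothing left to re-place
            return True
        which = 1 - which      # the evicted item goes to its slot in the other table
    return False               # gave up; caller should rehash/resize

# Toy usage (sizes and hash functions made up):
# tables = [[None] * 11, [None] * 11]
# cuckoo_insert(42, tables, hashes=[lambda k: k, lambda k: k // 11])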

An algorithm for top sort (Kahn’s): start with a node that doesn’t depend on anything (in-degree 0), output it, decrement the in-degrees of all the nodes it connects to, and repeat until everything’s done. 
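A minimal sketch of that in Python, assuming the graph is a dict from each node to the list of nodes it points to (an adjacency list, as in the representation section above):

from collections import deque

def topological_sort(graph):
    """graph: dict mapping node -> list of successors.
    Returns nodes in a topological order, or None if there is a cycle."""
    indegree = {node: 0 for node in graph}
    for node in graph:
        for succ in graph[node]:
            indegree[succ] = indegree.get(succ, 0) + 1
    # Start with every node that depends on nothing (in-degree 0).
    ready = deque(node for node, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for succ in graph.get(node, []):
            indegree[succ] -= 1          # node is done, so its outgoing edges go away
            if indegree[succ] == 0:
                ready.append(succ)
    return order if len(order) == len(indegree) else None   # leftovers mean a cycle

# Toy usage: topological_sort({'a': ['b', 'c'], 'b': ['c'], 'c': []}) -> ['a', 'b', 'c']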

————————————————————————
How to do rotations? Just memorize. 
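One crutch besides memorizing: a single right rotation only re-points two links and returns a new subtree root, and the left rotation is its mirror image. A sketch assuming plain BST/AVL nodes with left and right pointers (the field names are my own):

class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def rotate_right(y):
    """Rotate right around y: y's left child x becomes the root of this
    subtree, x's old right subtree becomes y's new left subtree, and y
    becomes x's right child. In-order order (the BST property) is preserved."""
    x = y.left
    y.left = x.right
    x.right = y
    return x            # caller hangs x where y used to be

# rotate_left is the same with every 'left' and 'right' swapped.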

Primary and secondary clustering, how to think about it?

Heaps: every node is smaller than its children (that’s a min-heap; a max-heap is the reverse). Insert by filling in from left to right, making swaps (percolating up) when appropriate. When you delete from the top (which is the only place you can delete from), you take the last node that you put in, put it at the top (in the new hole), and then percolate down.
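A minimal array-backed min-heap sketch of that in Python (0-based indexing, where the children of index i sit at 2i+1 and 2i+2; the class may have used 1-based indexing, so treat the index math as an assumption):

class MinHeap:
    def __init__(self):
        self.a = []

    def insert(self, x):
        # Fill in at the next left-to-right position, then percolate up.
        self.a.append(x)
        i = len(self.a) - 1
        while i > 0 and self.a[i] < self.a[(i - 1) // 2]:
            parent = (i - 1) // 2
            self.a[i], self.a[parent] = self.a[parent], self.a[i]
            i = parent

    def delete_min(self):
        # Deleting only happens at the top: move the last element you put in
        # into the hole at the root, then percolate it down.
        top = self.a[0]
        last = self.a.pop()
        if self.a:
            self.a[0] = last
            i, n = 0, len(self.a)
            while True:
                smallest = i
                for child in (2 * i + 1, 2 * i + 2):
                    if child < n and self.a[child] < self.a[smallest]:
                        smallest = child
                if smallest == i:
                    break
                self.a[i], self.a[smallest] = self.a[smallest], self.a[i]
                i = smallest
        return top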