Studying for Security Final

A principal is something that makes a statement. A key is an example of a principal. 

TCB (trusted computing base): the small amount on which we build our trust. So we want to minimize what is trusted. 

p.s means “who p calls s”. 
key(s) means promoting the key s to be a principal. So now it can have “says” statements. Essentially it could be read as “who owns the key s”. 

Computational infeasibility: for every polynomial p, there exists an N such that for all lambda > N, v(lambda) < 1/p(lambda). 

Message Authentication Code (MAC). Symmetric.  
	(G, T, V), a triplet of functions. 
	G outputs secret key. 
	T takes a key K and message m, and outputs a tag t. 
	V takes message, tag, and key K, and outputs a bit indicating whether the tag was made for the message. 
	Computationally infeasible to generate matching m,t even given verified pairs without knowing K. 

Digital Signatures. Asymmetric. 
	(G, S, V), a triplet of functions. 
	G outputs public and private key. 
	S generates signature with private key and message. 
	V uses the public key to verify the signature was created over the message with the private key. 
	Computationally infeasible to generate match without private key. 

Hash function: 
	Maps x to y using h(x). (output fixed bit length, input arbitrary)
	Preimage resistance: given y, computationally infeasible to find x where h(x)=y. 
	2nd preimage resistance: given an x, computationally infeasible to find another x' such that h(x) = h(x'). 
	Collision resistance: computationally infeasible to find any x, y with x != y such that h(x) = h(y). 

"K signed x" means that the message x has been signed by K. 

Roles and namespaces (sort of an abuse of notation): N.D means N running program D, or N as D. 

A channel can "speak for" a principal, meaning messages over that channel should be considered to be issued by whoever it speaks for. 

Trusted Platform Module (TPM): hardware component resistant to software and some hardware tampering. It keeps a running record of what ran on a machine, stored in PCRs (Platform Configuration Registers). 

Secure boot: actually check the TPM's PCR values against expected values. 

Certificate Revocation Lists (CRLs): revoking all certificates whose private key could be compromised. 

Key length seems to double every 20 years. 

DNS Security (DNSSEC): keeping the current way DNS works would, I think, require all root servers' public keys to be hardware-embedded. 

BGP False Origin: an AS claims to be directly connected to an address block when it isn't (it in effect claims to own that block). 
BGP False Path: AS claims to have a path it doesn’t have. 

Protected by RPKI: IANA provides ASes certificates attesting that they own a certain address space. This prevents false origin. 
BGPSEC: false path is prevented by each AS signing the path it advertises. 

Goals of secure communication: 
	1. Confidentiality (secrecy)
	2. Integrity (data stays same)
	3. Identity (who you’re talking to) 

Existential Forgery: adversary forges signature of one message, not necessarily of his/her choice. 
Selective: adversary can forge signatures of some messages of choice. 
Universal: adversary can forge sig of any message. 
Total break: adversary knows private key. 

Chosen message attack: adversary gets access to messages of their choosing and their sigs. 
Known message attack: adversary gets access to certain messages and their sigs, but doesn't choose them. 

Pseudorandom function: adversary’s ability to distinguish between pseudorandom and random is bounded (see: computationally infeasible). 

Pseudorandom functions make good MACs. 
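A sketch of that idea in C: use the PRF output itself as the tag, and let V recompute and compare. toy_prf and the other names here are mine, and toy_prf is a hypothetical, insecure stand-in for a real PRF (a real MAC would use something like HMAC or CMAC).

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical PRF stand-in; NOT cryptographically secure. */
static uint64_t toy_prf(uint64_t key, uint64_t x) {
    uint64_t v = key ^ x;
    v ^= v >> 33; v *= 0xff51afd7ed558ccdULL; v ^= v >> 33;
    return v;
}

/* T: tag a message under key K. */
static uint64_t mac_tag(uint64_t key, uint64_t msg) { return toy_prf(key, msg); }

/* V: accept iff the tag matches what T would produce. */
static int mac_verify(uint64_t key, uint64_t msg, uint64_t tag) {
    return mac_tag(key, msg) == tag;
}

int main(void) {
    uint64_t key = 0xdeadbeef, msg = 1234;
    uint64_t t = mac_tag(key, msg);
    printf("valid: %d, forged: %d\n",
           mac_verify(key, msg, t), mac_verify(key, msg + 1, t));
    return 0;
}
```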

Counter Mode (think "counting"): given a key K and a pseudorandom function f_K, take a random initialization vector r, run r+1 through f_K and XOR the result with message block 1, then do the same with r+2 and block 2, and so on. 
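A minimal counter-mode sketch, again with toy_prf as a hypothetical stand-in for f_K (in practice a block cipher like AES). Decryption is the same operation as encryption, since XORing the same keystream twice cancels it.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical PRF stand-in; NOT cryptographically secure. */
static uint64_t toy_prf(uint64_t key, uint64_t x) {
    uint64_t v = key ^ x;
    v ^= v >> 33; v *= 0xff51afd7ed558ccdULL; v ^= v >> 33;
    return v;
}

/* out[i] = in[i] XOR f_K(r + 1 + i); the IV r travels in the clear. */
static void ctr_crypt(uint64_t key, uint64_t r,
                      const uint64_t *in, uint64_t *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = in[i] ^ toy_prf(key, r + 1 + i);
}

int main(void) {
    uint64_t key = 0xdeadbeefcafef00dULL;
    uint64_t r = (uint64_t)rand();            /* fresh random IV per message */
    uint64_t m[2] = {0x1111, 0x2222}, c[2], d[2];
    ctr_crypt(key, r, m, c, 2);               /* encrypt */
    ctr_crypt(key, r, c, d, 2);               /* decrypt: same keystream */
    printf("round-trip: %llx %llx\n",
           (unsigned long long)d[0], (unsigned long long)d[1]);
    return 0;
}
```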

CPA-secure: given an encryption oracle and a test oracle, the adversary can't tell which of the two messages it gave the test oracle was encrypted (advantage bounded above by v_A). Intuitively, between two runs the encryption of the same message has to change, otherwise this isn't satisfied. 

ECB encryption (Electronic Code Book): just run f_K on each block of the message and concatenate the ciphertexts. 
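A quick demo of why ECB's determinism is a problem (toy_prf again as the hypothetical f_K stand-in): identical plaintext blocks give identical ciphertext blocks, which is exactly the repetition CPA security forbids.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical PRF stand-in; NOT cryptographically secure. */
static uint64_t toy_prf(uint64_t key, uint64_t x) {
    uint64_t v = key ^ x;
    v ^= v >> 33; v *= 0xff51afd7ed558ccdULL; v ^= v >> 33;
    return v;
}

int main(void) {
    uint64_t key = 7, m[3] = {42, 42, 99}, c[3];
    for (int i = 0; i < 3; i++)
        c[i] = toy_prf(key, m[i]);      /* ECB: each block independently */
    printf("c0 == c1? %s (equal plaintext blocks leak)\n",
           c[0] == c[1] ? "yes" : "no");
    return 0;
}
```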

CPA-Secure: must be non-deterministic or stateful. 

Stateful counter mode: carry the counter over between encryptions, so each new message's initialization vector continues from where the previous run left off. The first r should be inited randomly. 

Cipher Block Chaining: XOR each message block with the previous block's ciphertext before running the result through the function. The initialization vector is inited randomly. 
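A CBC sketch using a toy invertible block cipher (E/D below are hypothetical stand-ins for something like AES; not remotely secure). Encryption chains each ciphertext block into the next XOR; decryption inverts the block function and XORs the chain back off.

```c
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

static uint64_t rotl(uint64_t v, int r) { return (v << r) | (v >> (64 - r)); }
static uint64_t rotr(uint64_t v, int r) { return (v >> r) | (v << (64 - r)); }
static uint64_t E(uint64_t k, uint64_t v) { return rotl(v ^ k, 21); } /* toy */
static uint64_t D(uint64_t k, uint64_t v) { return rotr(v, 21) ^ k; } /* inverse */

void cbc_encrypt(uint64_t k, uint64_t iv, const uint64_t *m, uint64_t *c, size_t n) {
    uint64_t prev = iv;                  /* IV chosen fresh at random */
    for (size_t i = 0; i < n; i++) {
        c[i] = E(k, m[i] ^ prev);        /* chain previous ciphertext in */
        prev = c[i];
    }
}

void cbc_decrypt(uint64_t k, uint64_t iv, const uint64_t *c, uint64_t *m, size_t n) {
    uint64_t prev = iv;
    for (size_t i = 0; i < n; i++) {
        m[i] = D(k, c[i]) ^ prev;        /* undo the chain */
        prev = c[i];
    }
}

int main(void) {
    uint64_t k = 42, iv = 0x1234, m[3] = {1, 2, 3}, c[3], out[3];
    cbc_encrypt(k, iv, m, c, 3);
    cbc_decrypt(k, iv, c, out, 3);
    printf("%llu %llu %llu\n", (unsigned long long)out[0],
           (unsigned long long)out[1], (unsigned long long)out[2]);
    return 0;
}
```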

CCA (chosen ciphertext attack): the adversary is given a decryption oracle as well. Counter mode and CBC can be broken with these oracles. 

Lunchtime attack: adversary can query the decryption oracle only before querying the test oracle. 
Side-channel: instead of access to a decryption oracle, the adversary has access to a "side channel", e.g. a padding oracle. 

It is usually leaked whether a packet's padding was valid or not; treat this as a padding oracle. Padding oracles break CBC and counter mode, but you can do counter mode without padding. Public key systems are also vulnerable to padding attacks. 
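A sketch of how a padding oracle leaks CBC plaintext, assuming a toy 64-bit block cipher and PKCS#7-style padding (both hypothetical stand-ins; all names are mine). The oracle only answers "was the padding valid?", yet tweaking the IV lets the attacker solve for the last plaintext byte without the key; real attacks repeat this byte-by-byte across whole blocks.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint64_t rotl(uint64_t v, int r) { return (v << r) | (v >> (64 - r)); }
static uint64_t rotr(uint64_t v, int r) { return (v >> r) | (v << (64 - r)); }
static uint64_t toy_enc(uint64_t k, uint64_t v) { return rotl(v ^ k, 21); }
static uint64_t toy_dec(uint64_t k, uint64_t v) { return rotr(v, 21) ^ k; }

static uint64_t load(const unsigned char b[8]) { uint64_t v; memcpy(&v, b, 8); return v; }
static void store(unsigned char b[8], uint64_t v) { memcpy(b, &v, 8); }

static const uint64_t KEY = 0x0123456789abcdefULL; /* secret: never read by attacker */

/* The "server": CBC-decrypt one block, report only padding validity
 * (last byte n must be 1..8 and the last n bytes must all equal n). */
static int padding_oracle(const unsigned char iv[8], uint64_t c) {
    unsigned char m[8];
    store(m, toy_dec(KEY, c) ^ load(iv));
    int n = m[7];
    if (n < 1 || n > 8) return 0;
    for (int i = 8 - n; i < 8; i++)
        if (m[i] != n) return 0;
    return 1;
}

int main(void) {
    unsigned char iv[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    unsigned char m[8] = {'H','E','L','L','O','W','!',0x01}; /* one pad byte */
    uint64_t c = toy_enc(KEY, load(m) ^ load(iv));           /* CBC: c = E(m ^ iv) */

    /* Attack: for each guess g of the last plaintext byte, tweak the IV so
     * that byte decrypts to 0x01 (always-valid padding) iff the guess is
     * right. Real attacks also rule out accidental longer valid paddings
     * and iterate per byte and per block; that bookkeeping is omitted. */
    for (int g = 0; g < 256; g++) {
        unsigned char iv2[8];
        memcpy(iv2, iv, 8);
        iv2[7] ^= (unsigned char)(g ^ 0x01);
        if (padding_oracle(iv2, c)) {
            printf("recovered last plaintext byte: 0x%02x\n", g);
            break;
        }
    }
    return 0;
}
```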

Hierarchy of security protocol specifications: 
	Aliveness: if A initiates and completes a run of the protocol, apparently with B, then B was previously running the protocol. 
	Weak agreement: same, but now B is apparently running it with A. 
	Non-injective agreement: above, but A and B agree on the values in the protocol. 
	Agreement: all above but each run was unique (1 to 1 correspondence). 
	(strengthen all by adding “recent” on B’s part)

Be sure time is grouped with a nonce, otherwise it’s not useful for freshness. 

Type flaws: when one parameter in a message is confused with another, or with multiple others. 

Parallel session attacks: two or more protocol sessions are executed concurrently. The attacker can get answers by asking the victim the same questions and relaying the victim's answers back. It can also work like man-in-the-middling: run the protocol with the intended destination of the request but with different params, making the victim think the key they're getting is someone else's when it's really yours. 

Bastion Host: computer that needs to be highly secured because it's in a vulnerable position, usually exposed to the Internet. 
Dual-homed host: has two NICs, interfacing between two networks. 
Perimeter network (DMZ): network acting as buffer between internal network and wild internet. 
Choke router: gateway to interior network. 
Access router: gateway to perimeter network. 

Screened host: bastion host on internal network, with just a screening box between external and internal (no perimeter). 
Screened subnet: perimeter with bastion host, two screening routers. No central point of failure. 
Split-screened subnet: dual-homed host separating two parts of the perimeter network. 
Independent screened subnets: allows redundancy, two portals to the outside world (each with perimeter and bastion host), and isolates traffic on each. ISP might use something like this, could have all inbound on one side and all outbound on the other.

Danger: multiple interior routers connected to the same perimeter means internal traffic could be routed over perimeter network. 


Second half: 


There is a "network header" and "network footer" which encapsulate the packet at the physical layer. 

Things on the transport layer use ports. 

ARP is network layer (not transport layer). 

Simple routed network: one router, tree structure going up into one port on the router. 
Multiple routed subnets: one router, tree structure going into multiple ports on router. 
Mesh topology: matrix (e.g. 4 routers all connected to each other). Route calculations are used; gives fault tolerance. 

Next generation firewall: DPI, packet signatures, application awareness, SSL decryption(?)

The cause of these buffer overflows is that many string ops in C and C++ are unsafe: they don't check bounds when they make the copy. 
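A minimal sketch of the unsafe pattern (function names are mine): strcpy writes until it hits the input's NUL terminator and never looks at the destination's size, so a long input runs past the 16-byte buffer into adjacent stack data.

```c
#include <string.h>

/* Unsafe: no bounds check. Input longer than 15 bytes overruns buf into
 * adjacent frame data (saved base pointer, then the return address). */
void vulnerable(const char *input) {
    char buf[16];
    strcpy(buf, input);
}

/* Bounded alternative. */
void safer(const char *input) {
    char buf[16];
    strncpy(buf, input, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';   /* strncpy may omit the terminator */
}

int main(void) {
    safer("this string is far longer than sixteen bytes");
    /* calling vulnerable() with the same string would corrupt the stack */
    return 0;
}
```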

Heap smashing: really just crafting your inputs to overwrite a variable to gain access. That means you have to find a variable to overwrite that's security-critical (harder), and it has to be downstream (at higher memory addresses). 

Stack smashing: every program has a stack because an "activation record" or "stack frame" is appended to it when executing a function. You just need to find a way to overflow the return address, so look for a strcpy or something similar inside a function call. You also have to place hostile code in memory that we can jump to when the function returns. 

The old base pointer is right in front of (at lower addresses than) the return address of the function. 

Overwriting a local variable overwrites the return address of the function we're in, while overwriting a parameter overwrites the return address in the stack frame below us. 

Remember little-endian order (when manually putting in an address, start with the low-order byte). 
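A quick C demo of why: printing the bytes of a 32-bit value in address order shows the low-order byte first on a little-endian machine (0xdeadbeef is just an illustrative value).

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint32_t addr = 0xdeadbeef;
    unsigned char *p = (unsigned char *)&addr;
    /* On a little-endian machine this prints: ef be ad de */
    for (int i = 0; i < 4; i++)
        printf("%02x ", p[i]);
    printf("\n");
    return 0;
}
```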

Sometimes you have to fit the shell code after the return address if the buffer isn’t big enough. 

Countermeasures to overflows: 
	memory-safe languages - prevent dangling pointers, which is to say if an object no longer exists, the pointer to it is invalidated. 
	bounds checkers - add more info to pointers, indicating the lower and upper bounds of the thing the pointer references. When the pointer is used, check that it will not write beyond those bounds. 
	Randomization - introduce canaries; obfuscate memory addresses by XORing with random values. An attacker can still overwrite low-order bits, though, if their target is close. 
	Separators/Replicators - could run replicas with the stack growing in different directions and refuse to continue if the outputs don't match; could copy the return address elsewhere in memory so it can't be overwritten. 
	Virtual memory defenses - make the text of the process read-only (not foolproof). 
	Execution monitors - have policies like "only certain things can be opened" or "is the sequence of system calls consistent with the program that was loaded?" 
	Taint checking - don't allow tainted data in trusted places (like the return address). 
	Hardened libraries - libraries that don't allow for this kind of stuff. Never use gets(). 


Remember that it's a stack, so it grows from high addresses down to low ones. 


Java security models: originally, code runs inside a limited box (the sandbox). Code could be signed and then granted access to more resources based on the signer's privs. 

Applets can attack: 
	System modification (delete files, change memory) 
	Invasion of privacy: steal confidential data. 
	DoS: exhaust computing resources, filling filesystem, etc. 
	Antagonism: annoy the user. 

Type safety: only objects referenced in the program can be accessed (no raw pointers, essentially), and an object can only be modified if the modification is valid. 

All this stuff describes the sandbox. You could code sign and allow more freedoms. 

Basic 3 components of Java security: 
	Bytecode verifier (checks that the bytecode is valid): helps ensure type safety (only call methods on objects which have those methods). 
	Class loader: loads and unloads classes to/from the Java runtime. Part of its job is to make sure parts of the Java runtime aren't replaced by the applet's code. 
	Security manager: guards potentially dangerous functionality. Makes sure "dangerous methods" are checked before execution (at run time), mainly by checking which Class Loader loaded the class. 


The bytecode verifier does static checks (just making sure the methods being called on objects are valid), and then runtime checks (load definitions of not-yet-loaded classes; if the object doesn't exist, throw an exception). 

Users are prohibited from making their own class loaders, which could otherwise install their own security manager that just says yes to everything. 

Primordial class loader: sets up the Java environment and loads the API classes. I think it's written in C. It then loads class loader objects, which can be extended and load other classes. 

Applets get their own Class Loader namespace (they can only see within their namespace). 

Class loading procedure: essentially get it off the disk, read the file in as bytes, load the classes it references/extends, then check it with the verifier. 

Security manager: prevents new class loaders, controls file system ops, control network sockets, etc. 

Stack inspection (privilege flags set for system ops like file reads): look at the most recent frame first, and if a call with the "full" tag is found (not greyed out), then execute. 

Should probably just read about Java security outside the lectures. Get a more fundamental sense of it. 

Business assassin applet: bug allowed applets to kill each other. 

Type confusion: managing to get a pointer with one type actually pointing to an object of another type, such as the security manager, so you can mess with it as if it has public fields etc. When cycling through objects in an array, Java originally only checked the type of the first object. 

XSS: arises on sites that allow dynamic creation of page content. Cross-site because it's usually used to transfer info from the victim site to an attacker site, like sending a cookie in a URL. 
Reflected XSS - resulting from a parameter in a clicked link. 
Stored - resulting from persistent data stored in a database. 
DOM-based - using JavaScript document.write / excluding certain parts of the URL (like the fragment after #) so that you can insert stuff. 

CSRF: generating a URL to get people to click on, exploiting the fact that their cookies will be sent along so the action is carried out as them. 


Problems with "The Anonymizer" at the browser level: it would essentially have to rewrite scripts and links to go through the anonymizer. 

Janus: generated pseudonyms, email addresses, and passwords for sites that require accounts. One way function generates email, password, and username from the Janus user’s email, pass, and domain. 

Mixes: they batch, reorder, and change encoding, all to help keep anonymity. Timing attacks are handled by batching. The encoding change just means incoming messages are encrypted with a public-key system. 
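A toy sketch of the layered encoding, with XOR under k_i standing in for encryption under mix i's public key (an assumption for brevity; real mixes use actual public-key crypto, and all values here are made up). The sender wraps the message once per mix; each mix strips its own layer before batching, reordering, and forwarding.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t msg = 0xC0FFEE;
    uint64_t k1 = 111, k2 = 222, k3 = 333;   /* the three mixes' keys */

    /* Sender: innermost layer for the last mix (k3), outermost for the
     * first (k1), so each mix can only remove its own layer. */
    uint64_t onion = ((msg ^ k3) ^ k2) ^ k1;

    onion ^= k1;   /* mix 1 strips its layer, batches, reorders, forwards */
    onion ^= k2;   /* mix 2 does the same */
    onion ^= k3;   /* mix 3 recovers and delivers the plaintext */
    printf("delivered: 0x%llx\n", (unsigned long long)onion);
    return 0;
}
```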

Remember you have to include a return path, inserted in with your request and encrypted in layers in the reverse direction. 

Attacks: replay the same message through again (handled with timestamps). Bridging: the attacker submits all but one of a mix's messages, pigeonholing the remaining one; a solution could be dummy messages. Length guessing: could be fixed by padding. 

DC-Nets (only for small networks because of n^2 key sharing): everyone shares a secret key with everyone else. When someone decides to send a message, everyone agrees to XOR 0000 with every key they own, except the person who wants to send XORs in the message they want to send instead of 0000. Everyone XORs the results together, and because there is an even number of copies of every key, only the message is left. This could be implemented as a ring, where each party sends its own XOR result along instead of everyone calculating it themselves. 
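A one-round DC-net sketch for three parties A, B, C (key and message values are illustrative). Each announcement XORs together the keys that party shares; A also XORs in its message. Every pairwise key appears in exactly two announcements, so XORing all three cancels the keys and leaves only the message.

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t k_ab = 0x1111, k_ac = 0x2222, k_bc = 0x3333; /* pairwise keys */
    uint64_t msg = 0xBEEF;                                /* A wants to send */

    uint64_t a = k_ab ^ k_ac ^ msg;  /* sender mixes the message in */
    uint64_t b = k_ab ^ k_bc;        /* non-senders effectively send 0000 */
    uint64_t c = k_ac ^ k_bc;

    /* Each key appears twice, so it cancels; only msg survives. */
    printf("recovered: 0x%llx\n", (unsigned long long)(a ^ b ^ c));
    return 0;
}
```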

Theorem: a global observer knowing a set of keys K can learn only the parity of the messages sent by the nodes whose keys it doesn't know. 

Crowds: everyone in the crowd runs an open proxy, and they make requests for each other. A request actually gets sent to the server 50% of the time, say; the other 50% of the time it gets routed to another person in the crowd. Nobody then knows where requests originate. Ideally you want to pass it on to someone else 3/4 of the time. 
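A tiny simulation of the forwarding rule (crowd size and probability are illustrative assumptions). Each hop forwards to a random member with probability p_f and submits to the server otherwise, so the expected number of forwards is p_f/(1-p_f); with p_f = 3/4 that's 3.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    srand((unsigned)time(NULL));
    double p_f = 0.75;                  /* forwarding probability */
    int n_members = 10, current = 0, hops = 0;

    /* Keep forwarding with probability p_f; otherwise submit. */
    while ((double)rand() / RAND_MAX < p_f) {
        current = rand() % n_members;   /* hand off to a random member */
        hops++;
    }
    printf("node %d submits to the server after %d forwards\n", current, hops);
    return 0;
}
```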


Main stuff to remember: 

A principal is something that makes a statement. A key/person/machine is an example of a principal. 

Computational infeasibility: for every polynomial p, there exists an N such that for all lambda > N, v(lambda) < 1/p(lambda). Or: for every PPT A, there is a negligible v_A that bounds its success chance for lambda large enough, where v_A(lambda) < 1/p(lambda) for every polynomial p. 

Message Authentication Code (MAC). Symmetric.  

Digital Signatures. Asymmetric. 

TPM has the record of what ran stored in PCRs. 

Key length seems to double every 20 years. 

Existential Forgery: adversary forges signature of one message, not necessarily of his/her choice. 
Selective: adversary can forge signatures of some messages of choice. 
Universal: adversary can forge sig of any message. 
Total break: adversary knows private key. 

Chosen message attack: adversary gets access to messages of their choosing and their sigs. 
Known message attack: adversary gets access to certain messages and their sigs, but doesn't choose them. 

Counter Mode (think "counting"): given a key K and a pseudorandom function f_K, take a random initialization vector r, run r+1 through f_K and XOR the result with message block 1, then do the same with r+2 and block 2, and so on. 

CPA-secure: given an encryption oracle and a test oracle, the adversary can't tell which of the two messages it gave the test oracle was encrypted (advantage bounded above by v_A). Intuitively, between two runs the encryption of the same message has to change, otherwise this isn't satisfied. 

CPA-Secure: must be non-deterministic or stateful. 

Cipher Block Chaining: XOR each message block with the previous block's ciphertext before running the result through the function. The initialization vector is inited randomly. 

CCA (chosen ciphertext attack): the adversary is given a decryption oracle as well. Counter mode and CBC can be broken with these oracles. 

Lunchtime attack: adversary can query the decryption oracle only before querying the test oracle. 
Side-channel: instead of access to a decryption oracle, the adversary has access to a "side channel", e.g. a padding oracle. 

Hierarchy of security protocol specifications: 
	Aliveness: if A initiates and completes a run of the protocol, apparently with B, then B was previously running the protocol. 
	Weak agreement: same, but now B is apparently running it with A. 
	Non-injective agreement: above, but A and B agree on the values in the protocol. 
	Agreement: all above but each run was unique (1 to 1 correspondence). 
	(strengthen all by adding “recent” on B’s part)

Be sure time is grouped with a nonce, otherwise it’s not useful for freshness. 



Remember little-endian order (when manually putting in an address, start with the low-order byte). 

Basic 3 components of Java security: 
	Bytecode verifier (checks that the bytecode is valid): helps ensure type safety (only call methods on objects which have those methods). 
	Class loader: loads and unloads classes to/from the Java runtime. Part of its job is to make sure parts of the Java runtime aren't replaced by the applet's code (such as replacing the security manager). 
	Security manager: guards potentially dangerous functionality. Makes sure "dangerous methods" are checked before execution (at run time), mainly by checking which Class Loader loaded the class. 

Type confusion: managing to get a pointer with one type actually pointing to an object of another type, such as the security manager, so you can mess with it as if it has public fields etc. 


DC-Nets (only for small networks because of n^2 key sharing): everyone shares a secret key with everyone else. When someone decides to send a message, everyone agrees to XOR 0000 with every key they own, except the person who wants to send XORs in the message they want to send instead of 0000. Everyone XORs the results together, and because there is an even number of copies of every key, only the message is left. This could be implemented as a ring, where each party sends its own XOR result along instead of everyone calculating it themselves. 

Theorem: a global observer knowing a set of keys K can learn only the parity of the messages sent by the nodes whose keys it doesn't know. 

Crowds: everyone in the crowd runs an open proxy, and they make requests for each other. A request actually gets sent to the server 50% of the time, say; the other 50% of the time it gets routed to another person in the crowd. Nobody then knows where requests originate. Ideally you want to pass it on to someone else 3/4 of the time.