Chapter 1.1
The Stable Matching Problem - Gale and Shapley
- design an admissions/recruiting process that was self-enforcing
- based on matching preference lists
- if the process is based on self-interest, there will be situations where people commit and then go back on their commitments when a better offer comes along (retract and redirect)
- question asked: given the preference lists, can we assign applicants to employers so that for every employer E and applicant A who is not assigned to E, at least one of the following is the case
- the employer E prefers each of its accepted applicants to A
- the applicant A prefers her current situation over working for employer E
- Formulating the Problem
- to eliminate complications we assume that each of n applicants applies to each of n companies and each company accepts exactly one applicant → we will use the equivalent case of devising a system for matching males and females for marriage
- a matching S is a set of ordered pairs (m, w), with m from the set M of men and w from the set W of women, where each member of M and W appears in at most 1 pair
- a perfect matching S' is a matching where each member of M and W appears in exactly one pair in S'
- there is neither singlehood nor polygamy
- each member of M and W creates a preference list where they rank everyone of the opposite gender
- instability: a pair (m, w') not in S is unstable if m and w' each prefer the other to their current partner in S; a matching with no unstable pair is stable
- it is possible for an instance to have more than one stable matching
- Designing the Algorithm
- a free woman will naturally accept an engagement even if she prefers another male, since she can always trade up later
- if a free m proposes to a woman w who is already engaged to m', she accepts the proposal from m only if he ranks higher than m' on her preference list, and m' becomes free again
- the algorithm will stop when no man is free, i.e. when everyone is engaged (see the sketch after this list)
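A minimal Python sketch of the proposal process just described, assuming full preference lists for n men and n women; the names `gale_shapley`, `men_prefs`, and `women_prefs` are mine, not from the text:

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Men propose in decreasing order of preference; women trade up.

    men_prefs[m]   -- list of women, most preferred first
    women_prefs[w] -- list of men, most preferred first
    Returns a dict mapping each man to the woman he ends up engaged to.
    """
    # rank[w][m] = position of m on w's list (lower = more preferred)
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_idx = {m: 0 for m in men_prefs}   # next woman on m's list to try
    fiance = {}                            # engaged woman -> her partner
    free_men = deque(men_prefs)

    while free_men:
        m = free_men.popleft()
        w = men_prefs[m][next_idx[m]]      # highest-ranked woman m has not yet proposed to
        next_idx[m] += 1
        if w not in fiance:                # w is free: she accepts unconditionally
            fiance[w] = m
        elif rank[w][m] < rank[w][fiance[w]]:  # w prefers m to her fiance: trade up
            free_men.append(fiance[w])     # her old partner becomes free again
            fiance[w] = m
        else:                              # w rejects m; he stays free
            free_men.append(m)
    return {m: w for w, m in fiance.items()}
```

For example, `gale_shapley({'m1': ['w1', 'w2'], 'm2': ['w1', 'w2']}, {'w1': ['m1', 'm2'], 'w2': ['m1', 'm2']})` returns `{'m1': 'w1', 'm2': 'w2'}`: m2 is rejected by w1 and settles for w2.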
- Analyzing the Algorithm
- from the view of the woman
- once w is engaged she will remain engaged, and her sequence of partners only gets higher on her ranking list as men propose
- from the view of a man
- as m proposes, the sequence of women he proposes to gets lower on his list of ranked women
- maximum iterations needed for termination is n^2
- PROOF:
- measure the progress: a way of showing that each step of the algorithm brings it closer to termination
- each iteration is a man proposing to a woman he has never proposed to before → P(t) denotes the set of pairs (m, w) where m has proposed to w by the end of iteration t
- P(t) gains exactly one pair per iteration, and there are only n^2 possible pairs of men and women total, so P(t) can only increase n^2 times over the whole algorithm
- How do we show that the set S at the end of termination is a perfect matching?
- if a man is free at some point during the algorithm then there is a woman he has not proposed to
- proof by contradiction
- the set of engaged pairs always forms a matching, and at termination it is perfect because the algorithm cannot terminate with a free man (he would still have a woman left to propose to)
- How do we prove that the algorithm returns a set of stable matching?
- we already know that S is a perfect matching, so by way of contradiction we suppose S contains an instability (m, w') with (m, w) and (m', w') in S; if m never proposed to w', then w must rank higher than w' on his preference list (men propose in decreasing order), and if m did propose to w', she rejected him for someone she ranks higher, so w' prefers m' to m; either way (m, w') is not an instability
- there is an unfairness in the algorithm that favors male preferences over female
- question: do all executions of the algorithm yield the same matching? … YES!
- characterize the matching by showing each man gets his “best valid partner”
- a woman, w, is a valid partner for m if there is a stable matching with the pair (m, w)
- the order of the proposals in the algorithm has no effect on the final outcome
- PROOF
- by way of contradiction: suppose some execution gives a matching in which a man m is rejected by a valid partner w; since w is a valid partner, there is a stable matching S' containing the pair (m, w), and S' can then be shown to contain an instability, contradicting that S' is stable
- in this stable matching each woman is paired with her worst valid partner
Chapter Two
Computational Tractability
- our goal is to find efficient algorithms for computational problems! We will focus on running-time efficiency but will also note algorithms' efficiency in their use of space (memory)
- What does efficiency mean?
- an algorithm is efficient if, when implemented, it runs quickly on real input instances
- things this definition misses: where and how well the algorithm is implemented, and what counts as a “real” input instance (or the full range of input instances)
- we want efficiency to be platform-independent, instance-independent and of predictive value
- Worst-Case Running Times
- we analyze the worst-case running time by finding the largest possible running time the algorithm could have over all inputs of a given size
- Why do we use this instead of average case?
- it is hard to determine how to randomly generate input instances
- one set of random inputs could perform very poorly compared to another set, which tells us more about the random generation than about the algorithm
- How do we decide if the worst-case analysis running bound is impressive?
- by comparison with brute-force search over the space of possible solutions
- there is a natural brute-force search algorithm that checks every possible solution
- since this takes exponential time it is unacceptable (see the sketch below for the contrast)
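For contrast with the n^2 proposal algorithm above, a hedged sketch of what brute-force search looks like for stable matching: enumerate all n! perfect matchings and test each for an instability. The helper names (`is_stable`, `brute_force_stable`) are illustrative:

```python
from itertools import permutations

def is_stable(matching, men_prefs, women_prefs):
    """matching: dict man -> woman. True iff no pair (m, w2) exists
    where m and w2 each prefer the other to their assigned partners."""
    partner_of = {w: m for m, w in matching.items()}
    for m, w in matching.items():
        # every woman m ranks strictly above his assigned partner w
        for w2 in men_prefs[m][:men_prefs[m].index(w)]:
            m2 = partner_of[w2]
            # does w2 also prefer m to her assigned partner m2?
            if women_prefs[w2].index(m) < women_prefs[w2].index(m2):
                return False  # (m, w2) is an instability
    return True

def brute_force_stable(men_prefs, women_prefs):
    """Checks all n! possible assignments -- the 'unacceptable' running time."""
    men = list(men_prefs)
    for assignment in permutations(women_prefs):
        matching = dict(zip(men, assignment))
        if is_stable(matching, men_prefs, women_prefs):
            return matching
```

Already for n = 8 this examines up to 8! = 40,320 matchings, while the proposal algorithm makes at most n^2 = 64 proposals.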
- we offer a new definition of efficiency
- an algorithm is efficient if it achieves qualitatively better worst-case performance, at an analytical level, than brute-force search
- vagueness of “qualitatively better performance”
- polynomial time
- we want a good algorithm to have a good scaling property.
- desirable scaling property
- if the input size increases by one, the number of possibilities increases multiplicatively. We want better than this!! we want that when the input size increases by a constant factor, the algorithm should only slow down by some constant factor C (a quick check of this property follows this list)
- we get our third definition of efficiency
- an algorithm is efficient if it has a polynomial running time
- our specific definition of efficiency is beneficial because it becomes negatable
- it becomes possible to express that there is no efficient algorithm for a particular problem
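A quick check of the scaling property for polynomial bounds: if T(n) = c·n^d, then T(2n) = c·(2n)^d = 2^d · T(n), a constant factor independent of n. A tiny Python illustration (the constants c = 3, d = 2 are arbitrary choices of mine):

```python
def T(n, c=3, d=2):
    """A hypothetical polynomial running-time bound T(n) = c * n**d."""
    return c * n ** d

# doubling the input multiplies T by the constant 2**d, independent of n
for n in (10, 100, 1000):
    print(n, T(2 * n) / T(n))  # prints 4.0 each time, since 2**2 == 4
```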
Asymptotic Order of Growth
- an algorithm's worst-case running time on inputs of size n grows at a rate at most proportional to some function f(n), so f(n) is a bound on the running time of the algorithm
- we want to express growth rate of running times in a way that is insensitive to constant factors and low-order terms
- Asymptotic Upper Bounds
- T(n) is O(f(n)) if there are constants c > 0 and n₀ >= 0 such that T(n) <= c · f(n) for all n >= n₀. T is asymptotically upper-bounded by f.
- Asymptotic Lower Bounds
- we want to show that the upper bound is the best one possible
- T(n) is Ω(f(n)) if there are constants c > 0 and n₀ >= 0 such that T(n) >= c · f(n) for all n >= n₀.
- Asymptotic Tight Bounds
- T(n) grows exactly like f(n) to within a constant factor
- T(n) = pn^2 + qn + r (with constants p, q, r > 0) is both O(n^2) and Ω(n^2), hence Θ(n^2) (constants worked out below)
- tight bounds characterize the worst-case performance of an algorithm precisely up to constant factors.
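The constants behind that example, worked out as a short sketch (assuming p, q, r > 0):

```latex
% upper bound: for all n >= 1,
T(n) = pn^2 + qn + r \le pn^2 + qn^2 + rn^2 = (p + q + r)\, n^2 ,
% so c = p + q + r and n_0 = 1 witness T(n) = O(n^2);
% lower bound: T(n) \ge p n^2 for all n >= 0, so c = p witnesses T(n) = \Omega(n^2).
```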
- properties of asymptotic growth rates
- transitivity
- if a function f is asymptotically upper-bounded by a function g, and if g is asymptotically upper bounded by function h, then f is asymptotically upper-bounded by h.
- sums of functions
- if f = O(h) and g = O(h), then f + g = O(h) (one-line proof below)
- the overall running time is the sum of two functions, results on asymptotic bounds for sums of functions are relevant
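The one-line argument behind the sum rule, under the definitions above:

```latex
% if f(n) <= c_1 h(n) and g(n) <= c_2 h(n) for all n >= n_0, then
f(n) + g(n) \le c_1 h(n) + c_2 h(n) = (c_1 + c_2)\, h(n) ,
% so f + g = O(h) with constant c_1 + c_2
% (take n_0 to be the larger of the two thresholds).
```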
Asymptotic Bounds for Some Common Functions
- Polynomials
- their asymptotic rate of growth is determined by their highest-degree term
- algorithms with running-time bounds O(n^2) or O(n^3) are polynomial-time
- O(n log n) is also polynomial time
- Logarithms
- logarithms grow slower than any polynomial: for every base b > 1 and every x > 0, log_b n is O(n^x)
- Exponentials
- exponentials grow faster than any polynomial: for every r > 1 and every d > 0, n^d is O(r^n)
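To make the ordering logarithmic < polynomial < exponential concrete, a small illustrative comparison (the sample sizes and functions are my choices, not from the text):

```python
import math

# growth comparison: logs crawl, polynomials climb, exponentials explode
print(f"{'n':>4} {'log2 n':>8} {'n^2':>8} {'2^n':>22}")
for n in (8, 16, 32, 64):
    print(f"{n:>4} {math.log2(n):>8.1f} {n**2:>8} {2**n:>22}")
```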