Network flow stems from our original Bipartite Matching problem. That problem models situations where objects need to be paired off and we want a collection of such pairs; in particular, we can ask for the largest possible matching.
A flow network is a graph where each edge has a non-negative capacity associated with it. There is one source node and one sink node, following the standard definitions. The flow on an edge must not exceed that edge's capacity, and flow must be conserved: the flow into a node must equal the flow out of it (except at the source and sink nodes). The value of a flow is the total amount of flow leaving the source node.
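To make the two conditions concrete, here is a small Python sketch that checks whether a given flow is feasible and reports its value. The dictionary-of-edge-capacities representation and the function name are illustrative assumptions, not anything from the text.

from collections import defaultdict

def is_feasible_flow(capacity, flow, s, t):
    """Check the capacity and conservation conditions for a flow."""
    # Capacity condition: 0 <= f(e) <= c(e) on every edge.
    for e, f in flow.items():
        if f < 0 or f > capacity.get(e, 0):
            return False, None

    # Conservation condition: flow in equals flow out at every internal node.
    net = defaultdict(int)  # net[v] = flow out of v minus flow into v
    for (u, v), f in flow.items():
        net[u] += f
        net[v] -= f
    for v, balance in net.items():
        if v not in (s, t) and balance != 0:
            return False, None

    # Value of the flow: total flow leaving the source
    # (assuming, as usual, that no edges enter the source).
    return True, net[s]

For example, is_feasible_flow({('s', 'a'): 2, ('a', 't'): 3}, {('s', 'a'): 2, ('a', 't'): 2}, 's', 't') returns (True, 2).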
The Maximum-Flow problem asks us to find the greatest value of flow possible in a network without violating the capacity and conservation conditions. It turns out that this maximum value equals the minimum, over all ways of splitting the nodes into a set A containing the source and a set B containing the sink, of the total capacity of the edges crossing from A to B (more on this below). Because the natural greedy algorithm does not work (pushing flow only forward can get stuck at a suboptimal flow it cannot undo), we need a different approach. We define a residual graph on the same set of nodes as the original graph: each edge appears with its remaining capacity (capacity minus current flow), and each edge that carries flow also gets a reversed edge whose capacity equals that flow, so flow can be pushed back. We search the residual graph for an augmenting path to see whether we can add flow. Each iteration of the Ford-Fulkerson algorithm (finding one augmenting path and augmenting along it) takes O(m + n) time; with integer capacities there are at most C iterations, where C is the total capacity of the edges leaving the source, so the algorithm runs in O(mC) time.
Readability: 7/10
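As a rough sketch of the algorithm summarized above (not the book's pseudocode), here is a Ford-Fulkerson implementation that maintains residual capacities and repeatedly augments along paths found by breadth-first search. It uses the same hypothetical dictionary-of-capacities representation as the earlier sketch.

from collections import defaultdict, deque

def max_flow(capacity, s, t):
    """Ford-Fulkerson with BFS augmenting paths; returns the max s-t flow value."""
    # Residual capacities: forward edges start at full capacity,
    # reverse edges start at 0 and grow as flow is pushed across them.
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # the reverse edge lives only in the residual graph

    flow_value = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = {s: None}
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            break  # no augmenting path left: the current flow is maximum

        # Recover the path and its bottleneck (smallest residual capacity on it).
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[e] for e in path)

        # Augment: push bottleneck units forward, credit them to the reverse edges.
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow_value += bottleneck
    return flow_value

On the small network {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 2, ('b', 't'): 3} this returns 5. (Using BFS for the path search is the Edmonds-Karp variant; any search that finds some augmenting path gives the Ford-Fulkerson bound discussed above.)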
We need to show that the Ford-Fulkerson algorithm produces a maximum flow. We divide the graph's nodes into two sets such that the source is in set A and the sink is in set B. The capacity of the cut (A, B) is the total capacity of the edges leaving A and entering B. The value of any flow is at most the capacity of any cut, so every cut gives an upper bound on the flow; the tightest such bound comes from the cut of minimum capacity. In fact, the maximum value of a flow equals the capacity of a minimum cut (the Max-Flow Min-Cut Theorem). We note that the analysis assumes capacities are integers; rational capacities can be scaled to integers, and the theorem itself holds even for real-valued capacities.
Readability: 6/10
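The cut bound can be checked directly on a tiny example. The brute-force sketch below (illustrative only, and exponential in the number of nodes) computes the capacity of every s-t cut and takes the minimum; by the Max-Flow Min-Cut Theorem this value equals the maximum flow.

from itertools import combinations

def cut_capacity(capacity, A):
    """Capacity of the cut (A, B): total capacity on edges leaving A."""
    return sum(c for (u, v), c in capacity.items() if u in A and v not in A)

def min_cut_value(capacity, s, t):
    """Try every set A with s in A and t not in A, and return the cheapest cut."""
    nodes = {u for edge in capacity for u in edge} - {s, t}
    best = float('inf')
    for r in range(len(nodes) + 1):
        for subset in combinations(nodes, r):
            best = min(best, cut_capacity(capacity, {s, *subset}))
    return best

caps = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 'b'): 1, ('a', 't'): 2, ('b', 't'): 3}
print(min_cut_value(caps, 's', 't'))  # prints 5, matching the maximum flow found above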
The number of augmentations is bounded by C, the total capacity of the edges leaving the source. We can reduce the number of augmentations by using a scaling parameter: we only consider augmenting paths whose bottleneck capacity is at least the scaling parameter (a power of 2). Once there are no more augmenting paths at the current scaling parameter, we divide it by 2 to admit paths with smaller bottlenecks. This version runs in O(m² log₂ C) time, which for large values of C is much better than the O(mC) time of the version without a scaling parameter. There are also max-flow algorithms whose running time is polynomial in m and n alone, independent of C.
Readability: 7/10
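A sketch of the scaling idea, again on the hypothetical dictionary representation: augmenting paths are restricted to residual edges with capacity at least delta, and delta is halved whenever no such path remains.

from collections import defaultdict, deque

def scaling_max_flow(capacity, s, t):
    """Capacity-scaling Ford-Fulkerson; returns the max s-t flow value."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)

    def find_path(delta):
        # BFS restricted to residual edges with capacity at least delta.
        parent = {s: None}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] >= delta:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return None
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        return path

    # Start delta at the largest power of 2 not exceeding the largest capacity
    # on an edge out of the source.
    max_out = max((c for (u, _), c in capacity.items() if u == s), default=0)
    delta = 1
    while delta * 2 <= max_out:
        delta *= 2

    flow_value = 0
    while delta >= 1:
        path = find_path(delta)
        if path is None:
            delta //= 2  # no large-bottleneck paths left: halve the parameter
            continue
        bottleneck = min(residual[e] for e in path)
        for (u, v) in path:
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
        flow_value += bottleneck
    return flow_value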