Finding an Augmenting Path in a Residual Network

This post walks through how to find an augmenting path in a residual network and how that search fits into the Ford–Fulkerson family of max-flow algorithms.


General Steps to Find an Augmenting Path in a Residual Network $G_f = (V, E_f)$

Goal: Find a path from the source s to the sink t on which every edge has positive residual capacity. Once such a path is found, you can push an amount of flow equal to its bottleneck residual capacity along it.


Forward residual capacity means how much additional flow can still be pushed through an original edge $(u, v)$ under the current flow $f$. It represents the remaining usable capacity of that edge.

Below are the standard steps to find an augmenting path in the residual network $G_f = (V, E_f)$ (based on the Ford–Fulkerson framework; you can use DFS or Edmonds–Karp’s BFS for the search):


1. Construct / Update the Residual Network

  • For each edge $(u, v)$ in the original graph:

    • Compute the forward residual: $c_f(u,v) = c(u,v) - f(u,v)$
    • Compute the reverse residual: $c_f(v,u) = f(u,v)$
  • Only keep edges with $c_f > 0$ in $E_f$ (a short construction sketch follows below).
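
A minimal construction sketch in Python, assuming a nested-dict representation cf[u][v] for the residual capacities (the function name build_residual and this representation are illustrative choices, not part of the original text):

from collections import defaultdict

def build_residual(edges):
    """Residual capacities for the zero flow: c_f(u,v) = c(u,v), c_f(v,u) = 0.

    edges: iterable of (u, v, capacity) tuples from the original graph.
    Entries whose residual is zero are kept but simply skipped during the
    search, which is equivalent to leaving them out of E_f.
    """
    cf = defaultdict(dict)
    for u, v, cap in edges:
        cf[u][v] = cf[u].get(v, 0) + cap   # forward residual c(u,v) - f(u,v)
        cf[v].setdefault(u, 0)             # reverse residual f(u,v), zero at the start
    return cf

Updating the network later (step 6) then only means adjusting the two entries of each edge on the augmenting path.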

2. Search for a Path from s to t in $G_f$

  • Use DFS (depth-first) or BFS (breadth-first); a DFS sketch follows this list, and a BFS version appears in the code section below
  • Only follow edges with positive residual capacity
  • Track each node’s predecessor during the search to reconstruct the path
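
As a sketch under the nested-dict assumption from step 1 (names are again illustrative), an iterative depth-first search that records predecessors and only follows edges with positive residual capacity:

def dfs_augmenting_path(cf, s, t):
    """Iterative DFS for an s-to-t path along edges with positive residual capacity."""
    parent = {s: None}                     # doubles as the visited set
    stack = [s]
    while stack:
        u = stack.pop()
        for v, residual in cf.get(u, {}).items():
            if v not in parent and residual > 0:
                parent[v] = u              # record predecessor
                if v == t:                 # reached the sink: rebuild the path
                    path = [t]
                    while parent[path[-1]] is not None:
                        path.append(parent[path[-1]])
                    return path[::-1]
                stack.append(v)
    return None                            # no augmenting path exists

Swapping the stack for a queue turns this into the BFS used by Edmonds–Karp, shown in the code section later.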

3. Check the Search Result

  • If no path is found, then no augmenting path exists, the algorithm terminates, and the current flow is a maximum flow
  • Otherwise, record the path: $P = (s = v_0, v_1, \dots, v_k = t)$

4. Compute Bottleneck Capacity $\Delta$

$$
\Delta = \min_{i=0\dots k-1} c_f(v_i, v_{i+1})
$$
This is the maximum amount of additional flow that can be pushed along this path in this iteration.


5. Augment the Flow Along Path $P$

For each edge $(u, v)$ on the path (a short sketch follows this list):

  • If it’s a forward edge (exists in the original graph):
    $$
    f(u,v) \gets f(u,v) + \Delta
    $$
  • If it’s a backward edge (only exists in the residual graph, i.e., we are undoing flow):
    $$
    f(v,u) \gets f(v,u) - \Delta
    $$
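
Steps 4 and 5 in code, again under the nested-dict assumption (the helper names bottleneck and augment are illustrative). Lowering cf[u][v] and raising cf[v][u] covers both cases at once: on a forward edge it records extra flow, on a backward edge it cancels previously pushed flow.

def bottleneck(cf, path):
    """Step 4: smallest residual capacity along consecutive vertices of the path."""
    return min(cf[u][v] for u, v in zip(path, path[1:]))

def augment(cf, path, delta):
    """Step 5 (plus the residual update of step 6): push delta units along the path."""
    for u, v in zip(path, path[1:]):
        cf[u][v] -= delta                   # residual used up in the direction of the path
        cf[v][u] = cf[v].get(u, 0) + delta  # residual gained in the opposite direction

If the flow values themselves are needed, they can be recovered at the end as $f(u,v) = c(u,v) - c_f(u,v)$ for each original edge (assuming the original graph has no antiparallel edges).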

6. Update the Residual Network

  • Recalculate the residual capacities of affected edges
  • Add/remove residual edges as needed
  • Go back to Step 2 to search for the next augmenting path

Note

  • If you use BFS in Step 2 (i.e., Edmonds–Karp algorithm), you’re guaranteed polynomial time complexity: $O(VE^2)$.
  • If you use DFS, you get the original Ford–Fulkerson method, which is only pseudo-polynomial: with adversarially chosen integer capacities the number of augmentations can grow with the capacity values themselves (exponential in the input size), and with irrational capacities it may not terminate at all.

Q&A

Q: During the search, does “only follow edges with residual capacity > 0” distinguish between forward and reverse edges?

A: No. The rule applies regardless of whether an edge is forward or reverse.

  • Forward residual edge $(u,v)$: exists in the original graph and has $c_f(u,v) = c(u,v) - f(u,v) > 0$, meaning you can still push more flow to $v$.
  • Reverse residual edge $(v,u)$: may not exist in the original graph but appears because $f(u,v) > 0$, so $c_f(v,u) = f(u,v) > 0$, meaning you can cancel or undo flow.

In DFS or BFS, you explore every residual edge with $c_f > 0$ and treat it like a normal edge; there is no need to distinguish forward from backward during the search. This is what lets the search both add new flow and cancel previously pushed flow on its way from s to t.


Q: So an edge belongs to $E_f$ whenever its residual capacity is positive, whether it is a forward or a reverse edge?

A: Yes. In the residual network $G_f = (V, E_f)$, you add an edge to $E_f$ as long as its residual capacity is greater than zero, whether it’s:

  • a forward edge with $c_f(u,v) = c(u,v) - f(u,v) > 0$
  • or a reverse edge with $c_f(v,u) = f(u,v) > 0$

This ensures both forward and backward adjustments are supported during augmentation.


2. Search Strategies

| Strategy | Method | Shortest path guaranteed? | Worst-case complexity |
| --- | --- | --- | --- |
| DFS (classic Ford–Fulkerson) | Greedy depth-first along any edge with positive residual | ✘ No | $O(C \cdot E)$ with integer capacities¹; may not terminate otherwise |
| BFS (Edmonds–Karp) | Breadth-first; always finds a path with the fewest edges | ✔ Yes | $O(VE^2)$ |
| Layered BFS + DFS (Dinic) | (1) BFS builds a level graph; (2) DFS sends blocking flow within the layers | ✔ Yes (shortest by layers) | $O(V^2E)$, or $O(E\sqrt{V})$ on unit-capacity graphs |
| Shortest augmenting path + Dijkstra (min-cost flow) | Uses edge costs in the residual graph, typically with a potential function | ✔ (minimum cost) | Varies by implementation |

¹ With irrational, adversarially constructed capacities, Ford–Fulkerson may fail to terminate; with integer capacities, the number of augmentations is at most the maximum flow value $C$, which gives the $O(C \cdot E)$ bound.


3. BFS-Based Augmenting Path – Python Code

from collections import deque

def bfs_augmenting_path(cf, s, t):
    """Breadth-first search for an s-to-t path in the residual network.

    cf: residual capacities as a nested dict cf[u][v], i.e. the adjacency
    structure of G_f = (V, E_f).
    """
    parent = {s: None}                     # stores the path and doubles as the visited set
    queue = deque([s])

    while queue:
        u = queue.popleft()
        for v, residual in cf.get(u, {}).items():
            if v not in parent and residual > 0:
                parent[v] = u              # record predecessor
                if v == t:                 # reached sink
                    return reconstruct_path(parent, s, t)
                queue.append(v)

    return None                            # no s→t path ⇒ max flow reached

def reconstruct_path(parent, s, t):
    """Backtrack from t using parent to recover the full path, then reverse it."""
    path = [t]
    while path[-1] != s:
        path.append(parent[path[-1]])
    return path[::-1]

  • reconstruct_path backtracks from t using parent to recover the full path back to s, then reverses it.
  • After getting the path, compute the bottleneck $\Delta = \min c_f(u,v)$ over its edges and push $\Delta$ units of flow:

    • Forward edge: $f(u,v) \gets f(u,v) + \Delta$
    • Backward edge: $f(v,u) \gets f(v,u) - \Delta$

4. When to Stop?

You stop when no more augmenting paths exist (i.e., the BFS/DFS search finds no s→t path and returns nothing). At that point, the current flow is a maximum flow.
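
Putting the pieces together, here is a sketch of the full Edmonds–Karp loop, reusing the illustrative helpers from the earlier sketches (build_residual, bfs_augmenting_path, bottleneck, augment); the loop stops exactly when the search returns no path:

def max_flow(edges, s, t):
    """Repeatedly find shortest augmenting paths until none exists."""
    cf = build_residual(edges)                  # step 1: residual network for the zero flow
    total = 0
    while True:
        path = bfs_augmenting_path(cf, s, t)    # steps 2-3: search for an s-t path
        if path is None:                        # no augmenting path: current flow is maximum
            return total
        delta = bottleneck(cf, path)            # step 4: bottleneck capacity
        augment(cf, path, delta)                # steps 5-6: push flow, update residuals
        total += delta

# Example with made-up capacities: two disjoint s-t routes of capacity 3 and 2.
edges = [('s', 'a', 3), ('a', 't', 3), ('s', 'b', 2), ('b', 't', 2)]
print(max_flow(edges, 's', 't'))                # prints 5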


Practical Tips

  1. Integer capacities: BFS (Edmonds–Karp) is simple and sufficient for coursework and small-to-medium experiments.
  2. Large sparse graphs: Prefer Dinic’s algorithm, which is much faster in practice (see the sketch after this list).
  3. Minimum-cost max flow: Put costs on the residual edges and repeatedly find cheapest augmenting paths with Bellman–Ford (which tolerates negative edge costs, assuming no negative cycles) or Dijkstra with a potential function.
  4. Implementation: Use adjacency lists for the residual edges and avoid rebuilding the graph on every iteration. For each original edge, store the forward and backward edges as a pair so flows and capacities can be updated together; the Dinic sketch below uses exactly this layout.
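
As a sketch of tips 2 and 4 combined (the class name and layout below are illustrative, not taken from the original text): Dinic’s algorithm with an adjacency list in which every edge stores the index of its paired reverse edge, so pushing flow updates both directions in constant time.

from collections import deque

class Dinic:
    """Dinic's algorithm: BFS builds a level graph, DFS sends blocking flow along it."""

    def __init__(self, n):
        self.n = n
        self.adj = [[] for _ in range(n)]       # adj[u] = list of [v, capacity, index of reverse edge]

    def add_edge(self, u, v, cap):
        # paired forward/backward edges; each stores the index of the other
        self.adj[u].append([v, cap, len(self.adj[v])])
        self.adj[v].append([u, 0, len(self.adj[u]) - 1])

    def _bfs(self, s, t):
        self.level = [-1] * self.n
        self.level[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v, cap, _ in self.adj[u]:
                if cap > 0 and self.level[v] < 0:
                    self.level[v] = self.level[u] + 1
                    queue.append(v)
        return self.level[t] >= 0               # is the sink reachable in the level graph?

    def _dfs(self, u, t, pushed):
        if u == t:
            return pushed
        while self.it[u] < len(self.adj[u]):
            v, cap, rev = self.adj[u][self.it[u]]
            if cap > 0 and self.level[v] == self.level[u] + 1:
                d = self._dfs(v, t, min(pushed, cap))
                if d > 0:
                    self.adj[u][self.it[u]][1] -= d     # use forward residual capacity
                    self.adj[v][rev][1] += d            # credit the paired reverse edge
                    return d
            self.it[u] += 1                             # dead end: skip this edge from now on
        return 0

    def max_flow(self, s, t):
        flow = 0
        while self._bfs(s, t):                          # one phase per level graph
            self.it = [0] * self.n                      # per-node edge pointer for the DFS
            while True:
                d = self._dfs(s, t, float('inf'))
                if d == 0:
                    break
                flow += d
        return flow

# Example with made-up capacities: nodes 0..3, source 0, sink 3, expected max flow 5.
g = Dinic(4)
g.add_edge(0, 1, 3)
g.add_edge(1, 3, 3)
g.add_edge(0, 2, 2)
g.add_edge(2, 3, 2)
print(g.max_flow(0, 3))                                 # prints 5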

TL;DR
Finding an augmenting path in the residual network = running a search (BFS, DFS, or Dijkstra) in a graph of all currently usable edges. Your choice of strategy affects runtime and complexity.
