All Pairs Bottleneck Paths and Max-Min Matrix Products in Truly Subcubic Time

In the all pairs bottleneck paths (APBP) problem, one is given a directed graph with real weights on its edges. Viewing the weights as capacities, one is asked to determine, for all pairs (s,t) of vertices, the maximum amount of flow that can be routed along a single path from s to t. The APBP problem was first studied in operations research, shortly after the introduction of maximum flows and all pairs shortest paths. We present the first truly subcubic algorithm for APBP in general dense graphs. In particular, we give a procedure for computing the (max,min)-product of two arbitrary matrices over R ∪ {−∞, ∞} in O(n^(2+ω/3)) ≤ O(n^2.792) time, where n is the number of vertices and ω is the exponent for matrix multiplication over rings. Max-min products can be used to compute the maximum bottleneck values for all pairs of vertices together with a "successor matrix" from which one can extract an explicit maximum bottleneck path for any pair of vertices in time linear in the length of the path.


Introduction
In recent years, researchers have found surprisingly strong connections between the complexity of fundamental graph problems and the complexity of matrix multiplication over a ring. Much of the prominent work in this area [22,12,24,30] has developed fast algorithms for certain interesting cases of the all pairs shortest paths (APSP) problem in truly subcubic time, i.e., O(n^(3−δ)) for some constant δ > 0, where n is the number of vertices in the graph. Still, it remains to be seen if the general APSP problem can be solved in truly subcubic time. Several algorithms have been given for solving APSP in n^(3−o(1)) time; the most recent development is by Chan [2] and runs in O(n^3 log log^3 n / log^2 n) time.
While we are still unable to give a bona fide subcubic algorithm for APSP, we do present such an algorithm for an intimately related problem, all pairs bottleneck paths (APBP). In this problem, one is given a directed graph with (arbitrary) real edge weights representing capacities, and the problem is to report, for all pairs (s,t) of vertices, the maximum amount of flow that can be routed from s to t along any single path. (This amount is given by the smallest weight edge on the path, a.k.a. the bottleneck edge.) Our algorithm for APBP runs in O(n^(2+ω/3)) ≤ O(n^2.792) time, where ω is the exponent of matrix multiplication over a ring. We can also obtain explicit maximum bottleneck paths: after Õ(n^(2+ω/3)) preprocessing, we can return an explicit simple maximum bottleneck path between any pair of vertices s,t in O(ℓ) time, where ℓ is the number of edges in the returned path. That is, the algorithm can be used to efficiently find maximum bottleneck paths as well.
The APBP problem has been studied alongside APSP in several contexts. Pollack [21] introduced APBP (calling it the maximum capacity route problem), and showed how the cubic APSP algorithms of that time could be modified to solve it. Hu [13] proved that in undirected graphs, APBP can be solved optimally in O(n^2) time, by simply taking the paths in a maximum spanning tree. The directed case of the problem has remained open until now, and recently appeared as an explicit goal in Shapira et al. [23]. Prior to our work, the fastest algorithm for general APBP used Fredman and Tarjan's implementation of Dijkstra's algorithm [9] on all nodes, in O(mn + n^2 log n) time, where m and n are the number of edges and nodes in the graph, respectively.
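As an illustration of Hu's observation for the undirected case, the following Python sketch (the function name and graph representation are ours, not from the paper) computes all pairs bottleneck values by building a maximum spanning tree with Kruskal's algorithm and then reading bottleneck values off tree paths, in O(n^2) time after sorting the edges:

```python
def undirected_apbp(n, edges):
    """All pairs bottleneck values in an undirected graph via a maximum
    spanning tree (Hu's observation).  edges: list of (u, v, weight)."""
    # Kruskal's algorithm on edges in order of decreasing weight.
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = [[] for _ in range(n)]
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree[u].append((v, w))
            tree[v].append((u, w))

    # From each source, one tree traversal yields all bottleneck values.
    INF = float('inf')
    b = [[-INF] * n for _ in range(n)]
    for s in range(n):
        b[s][s] = INF
        stack = [s]
        while stack:
            u = stack.pop()
            for v, w in tree[u]:
                if b[s][v] == -INF:
                    b[s][v] = min(b[s][u], w)
                    stack.append(v)
    return b
```

The n tree traversals dominate, so the whole procedure is quadratic once the tree is built; no comparable trick is known for directed graphs, which is the subject of this paper.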
A problem related both to APSP and APBP is the all pairs bottleneck shortest paths problem (APBSP), first considered by [23]. Consider a scenario in which we want to get from location u to location v in as few hops as possible, and subject to this, we wish to maximize the flow that we can route from u to v. In other words, we want to compute for each pair of vertices the shortest (unweighted) distance d(u, v) and the maximum bottleneck weight b(u, v) of a path of length d(u, v) from u to v. Shapira et al. [23] gave a truly subcubic algorithm for APBSP in the node-weighted case. We show that the more general edge-weighted case can also be solved in subcubic time. Our solution runs in Õ(n^((15+ω)/6)) ≤ O(n^2.896) time. Our method for APBP and APBSP is based on a new O(n^(2+ω/3)) algorithm for computing the (max, min)-product of two n × n matrices with arbitrary entries from R ∪ {∞, −∞}.
The (max, min)-product of an n × ℓ matrix A and an ℓ × m matrix B is the n × m matrix C = A ⊛ B with C[i, j] = max_k min{A[i, k], B[k, j]} for all i = 1, ..., n and j = 1, ..., m. This is the ordinary matrix product over the (max, min) semiring with entries from R ∪ {∞, −∞}, and it is the natural generalization of the Boolean matrix product to totally ordered sets of arbitrary size. Besides its importance in flow problems, the (max, min)-product is also an important operation in fuzzy logic, where it is known as the composition of relations ([7], pg. 73). The ideas behind our (max, min)-product algorithm use ingredients from the dominance approaches of prior work; for more details, see Section 3.
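For concreteness, the definition can be evaluated directly in O(nℓm) time; the point of this paper is to beat this cubic baseline for square matrices. A minimal reference implementation in Python (names are ours):

```python
def max_min_product(A, B):
    """C = A (max,min)-product B: C[i][j] = max_k min(A[i][k], B[k][j]).
    A is n x l, B is l x m; direct evaluation takes O(n*l*m) time."""
    n, l, m = len(A), len(B), len(B[0])
    return [[max(min(A[i][k], B[k][j]) for k in range(l))
             for j in range(m)] for i in range(n)]
```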
Throughout this paper we use the standard addition-comparison computational model, along with random access to registers. In the algorithms of this paper, the only operations we actually use on real numbers are comparisons between them.

Related work
In addition to the work mentioned above, there are a few other interesting results on APBP that deserve mention. Karger et al. [16] show that any "path comparison" algorithm (one that only accesses edge weights by comparing the weights of two different paths) requires Ω(n^3) time to compute both APSP and APBP. By way of fast matrix multiplication, our algorithm performs comparisons on rather unrelated pairs of edges, circumventing the above lower bound. Subramanian [26] proved that on random (Erdős-Rényi) graphs, both APBP and APSP can be solved in O(n^2 log n) time.
Very recently, Shapira et al. [23] have given algorithms for APBP in the special case where the vertices have weights, but not the edges. Their algorithms run in O(n^2.58) time and use fast rectangular matrix multiplication [4,14]. Note that if ω = 2, then their algorithms can be implemented to run in roughly O(n^2.5) time. Note also that the vertex-weight case can be easily reduced to the edge-weight case, by setting the weight of an edge to be the minimum weight of its two endpoints. Their algorithm relies on the linearity of the number of weights: as the number of weights in the vertex-weight case is only n, but the number in the edge-weight case can be Ω(n^2), their techniques do not seem to apply to the latter case. The authors of [23] also stated the goal of finding a truly subcubic algorithm for the (max, min) matrix product as an open problem, which we resolve in this paper.

Preliminaries
For every graph (V, E) in this paper we let n = |V| and m = |E|. We refer to the elements of V as nodes and vertices interchangeably. We refer to the elements of E as edges. Without loss of generality, the graphs in this paper are weakly connected, so that m ≥ n − 1.
We use M^T to denote the transpose of a matrix M. As is common, we define ω ≥ 2 to be the infimum of all real numbers s such that matrix multiplication over a ring is in O(n^s) arithmetic operations. The best known upper bound is ω < 2.376, given by Coppersmith and Winograd [5].
We use a special matrix product in our algorithms, first defined by Matoušek [19].
Definition 2.1. Given two n × n matrices A and B over a totally ordered set, the dominance product is the n × n matrix C = A ⊙ B defined by C[i, j] = |{k : A[i, k] ≤ B[k, j]}|.

For technical reasons, we use the following definition of weight functions.
Definition 2.2. An edge-weighted graph G = (V, E, w) consists of a directed graph (V, E) and a weight function w : V × V → R ∪ {−∞, ∞} with the following properties for all u, v ∈ V: w(u, u) = ∞, and w(u, v) = −∞ whenever u ≠ v and (u, v) ∉ E. It is obvious that a standard weight function from E to R can be uniquely extended to a weight function in the above sense.

Definition 2.3. Given an edge-weighted graph G = (V, E, w), a bottleneck edge of a path between vertices u and v is a smallest weight edge on that path. A maximum bottleneck path between u and v is a path whose bottleneck edge weight is equal to the maximum of the bottleneck edge weights of all paths from u to v.

The dominance approach
We begin by revisiting an approach used by Chan [3] and Vassilevska and Williams [28] to find improved algorithms for all pairs shortest paths and maximum node-weighted triangles, respectively. In this approach, one reduces a weighted graph problem to the dominating pairs problem from computational geometry, then uses a fast algorithm for that problem. In the dominating pairs problem, one is given a set X of n points in k-dimensional space, and the task is to compute all pairs (x, y) with x, y ∈ X such that x[i] ≤ y[i] for all coordinates i.
Let M_X be the n × k matrix whose rows are the points of X. One way to determine dominating pairs is to compute the dominance product of M_X and M_X^T, as defined in Section 2. Then (M_X ⊙ M_X^T)[i, j] = k if and only if (i, j) is a dominating pair. The best known algorithm in terms of n for the dominance product of two n × n matrices is due to Matoušek [19].

Theorem 3.1 (Matoušek [19]). The dominance product of two n × n matrices A and B with entries from a totally ordered set is computable in O(n^((3+ω)/2)) time.
A nice advantage of the dominance approach is that sums of pairs of elements can be quickly compared to a global constant, which is useful in some weighted graph problems. For example, suppose we are given a constant K and an edge-weighted graph G = (V, E, w) where w : E → R, and we want to compute for all pairs of vertices i, j whether there is a path of the form i → k → j of total sum at least K. Then one can set up matrices A and B with A[i, k] = K − w(i, k) when (i, k) ∈ E and A[i, k] = ∞ otherwise, and B[k, j] = w(k, j) when (k, j) ∈ E and B[k, j] = −∞ otherwise. Then (A ⊙ B)[i, j] > 0 if and only if there is a k for which (i, k), (k, j) ∈ E and K − w(i, k) ≤ w(k, j), i.e., w(i, k) + w(k, j) ≥ K. In this paper, we find a new application of the dominance approach, culminating in a genuinely subcubic algorithm for APBP.
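The threshold trick above can be sanity-checked with a brute-force dominance product standing in for Matoušek's fast algorithm. In this Python sketch (representation and names are ours; `None` marks a non-edge), a pair (i, j) is reported exactly when some two-hop path i → k → j has total weight at least K:

```python
INF = float('inf')

def dominance_product(A, B):
    """D[i][j] = |{k : A[i][k] <= B[k][j]}|; a cubic stand-in for
    Matousek's O(n^((3+omega)/2)) algorithm."""
    n = len(A)
    return [[sum(1 for k in range(n) if A[i][k] <= B[k][j])
             for j in range(n)] for i in range(n)]

def two_hop_at_least(w, K):
    """For all pairs (i, j): is there a k with edges i->k and k->j such
    that w(i,k) + w(k,j) >= K?  w[i][k] is a real weight or None for a
    non-edge.  Non-edges are mapped so the comparison always fails."""
    n = len(w)
    A = [[K - w[i][k] if w[i][k] is not None else INF for k in range(n)]
         for i in range(n)]
    B = [[w[k][j] if w[k][j] is not None else -INF for j in range(n)]
         for k in range(n)]
    D = dominance_product(A, B)
    # K - w(i,k) <= w(k,j) holds for some valid k iff the count is positive.
    return [[D[i][j] > 0 for j in range(n)] for i in range(n)]
```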
In our applications that use a dominance product, we shall only want to perform comparisons with certain entries of the matrices. For example, suppose matrices A and B are over R ∪ {∞}, such that A has mostly ∞ entries, while B has mostly finite entries. Then, in the computation of the dominance product A ⊙ B, many of the comparisons (A[i, k] ≤ B[k, j]) are false; it only makes sense to compare the finite entries of A with entries in B. To this end, we design a special algorithm for dominance product, in the case where one wishes to ignore large portions of the matrix A.
Theorem 3.2 (Sparse Dominance Product). Let A and B be n × n matrices with entries from a totally ordered set, and let S ⊆ [n] × [n] with |S| = m. There is an algorithm SD that, given A, B, and S, outputs the n × n matrix C with C[i, j] = |{k : (i, k) ∈ S and A[i, k] ≤ B[k, j]}| in O(n^2 + √m · n^((1+ω)/2)) time.

Proof. Call the entries of A with coordinates in S the relevant entries of A. For every j = 1, ..., n, let L_j be the sorted list containing the relevant entries from A in column j, along with the entries from B in row j. Let g_j be the number of relevant entries of A in L_j, for all j. Clearly, ∑_j g_j = m. Pick a parameter r and partition each L_j into r consecutive buckets, such that every bucket contains at most ⌈g_j/r⌉ relevant entries of A. Note that the bucket sizes are not necessarily uniform.
For every bucket number b = 1, ..., r, create Boolean matrices A_b and B_b: set A_b[i, j] = 1 if and only if (i, j) ∈ S and A[i, j] lies in bucket b of L_j, and set B_b[j, k] = 1 if and only if B[j, k] lies in a bucket of L_j strictly greater than b. For each bucket number b, compute C_b = A_b × B_b (where × is matrix multiplication over the integers). This step takes O(rn^ω) time and computes, for every pair i, k and bucket number b, the number of j such that A[i, j] lies in bucket b of L_j and B[j, k] lies in a later bucket (so that A[i, j] ≤ B[j, k]). Initialize an n × n matrix D to be all zeroes. In every bucket b of L_j, there are at most ⌈g_j/r⌉ relevant entries of A and some number t_jb of entries from B. Compare every A-entry with every B-entry in the bucket, incrementing D[i, k] whenever A[i, j] ≤ B[j, k]. Over all j and b, this takes time on the order of ∑_j ∑_b ⌈g_j/r⌉ · t_jb ≤ ∑_j ⌈g_j/r⌉ · n = O(n^2 + mn/r). After all buckets of all lists are processed, D[i, k] is the number of j such that A[i, j] ≤ B[j, k] where A[i, j], B[j, k] are in the same bucket of L_j.
Finally, set C = ∑_{b=1}^r C_b + D. It is easy to verify from the above that the algorithm returns the desired C. The overall runtime of the above procedure is O(rn^ω + n^2 + mn/r); choosing r = √(m · n^(1−ω)) yields the claimed bound of O(n^2 + √m · n^((1+ω)/2)).

We can give a slightly more general result using a lemma by Huang and Pan [14]. Fix r ∈ (0, 1]. Define ω_r to be the infimum of all real numbers s such that multiplication of an n × n^r matrix and an n^r × n matrix over a ring can be done in O(n^s) arithmetic operations. Observe that ω_r ≤ 2(1 − r) + rω, since the product of an n × n^r and n^r × n matrix can be computed with n^(2(1−r)) products on pairs of n^r × n^r matrices. Building on Coppersmith [4], Huang and Pan proved:

Lemma 3.3 (Huang and Pan [14]). Let α = sup{0 < r ≤ 1 : ω_r = 2 + o(1)}. Then α > 0.294, and for all r ∈ [α, 1], ω_r ≤ 2 + (ω − 2)(r − α)/(1 − α), where ω is the n × n matrix multiplication exponent.

Corollary 3.4.
There is an algorithm for the sparse dominance product in which both matrices have restricted sets of relevant entries: given n × n matrices A and B and sets S_A, S_B ⊆ [n] × [n] with |S_A| = m_1 and |S_B| = m_2, it outputs the matrix C with C[i, j] = |{k : (i, k) ∈ S_A, (k, j) ∈ S_B, and A[i, k] ≤ B[k, j]}|. In the proof, comparisons within buckets are handled directly in O(m_2 + m_1·m_2/D) time for a parameter D, while the comparisons between distinct buckets reduce to the product of an n × O(D) matrix C' and an O(D) × n matrix C'', computable via Lemma 3.3, where the current best value for α is > 0.294. In the first case, we set D = n and the final runtime is O(n^ω). Otherwise, D is chosen to balance the O(m_1·m_2/D) cost of the bucket comparisons against the cost of the rectangular product.

All Pairs Bottleneck Paths
Armed with the sparse dominance product algorithm, we now turn to the all pairs bottleneck paths problem. We first show how to compute the (max, min)-product of matrices in truly subcubic time. Just as the (min, +)-product (or distance product) can be used to find all pairs shortest paths [1], the (max, min)-product gives a way to solve APBP.

Max-Min Product
Recall that the (max, min)-product of two matrices A and B is defined to be the matrix C such that C[i, j] = max_k min{A[i, k], B[k, j]}. Clearly, the (max, min)-product of two matrices A and B can be modeled by an APBP computation on a three-layered graph, where the edge weights from the first to the second layer come from A and the edge weights from the second to the third layer come from B. Moreover, Corollary 4.2 states that APBP on an n-vertex graph can be solved in roughly the time it takes to compute a (max, min)-product of n × n matrices. This result follows from a more general result (Theorem 4.1) for closed semirings due to Fischer and Meyer [8], Furman [10,11], and Munro [20].
A closed semiring is an algebraic structure weaker than a ring: (R, ⊕, ⊗, 0, 1) is a closed semiring if all of the following conditions hold: (1) R is a set with 0, 1 ∈ R which is closed under the binary operations ⊕ and ⊗; (2) ⊕ is commutative, associative, idempotent, and 0 is an identity under ⊕; (3) ⊗ is associative, distributes over ⊕, and 1 is an identity under ⊗; (4) 0 is a multiplicative annihilator: ∀x ∈ R : x ⊗ 0 = 0 ⊗ x = 0; (5) finally, there is a unary operation * so that a* = 1 ⊕ (a ⊗ a*) for all a ∈ R. The transitive closure of a square matrix A over a closed semiring R is always well-defined as the solution A* to A* = I ⊕ (A ⊗ A*), where I is the identity matrix over R, and the ⊕ and ⊗ operations on two matrices X, Y are given by (X ⊕ Y)[i, j] = X[i, j] ⊕ Y[i, j] and (X ⊗ Y)[i, j] = ⊕_k (X[i, k] ⊗ Y[k, j]). Under these operations, the set of n × n matrices over a closed semiring R is also a semiring, with the identity and zero matrices playing the roles of 1 and 0.

Theorem 4.1 ([8, 10, 20], [1], pp. 204-206). If the product of two arbitrary n × n matrices over a closed semiring R can be computed in M(n) time so that M(2n) ≥ 4M(n), then there exists a constant c such that the time T(n) to compute the transitive closure of an arbitrary n × n matrix over R satisfies T(n) ≤ cM(n).
Since (R ∪ {∞, −∞}, max, min, −∞, ∞) is a closed semiring (it is also known as the subtropical semiring), we immediately obtain the following corollary.
Corollary 4.2. Let M(n) be such that M(2n) ≥ 4M(n). If the (max, min)-product of two arbitrary real n × n matrices is computable in M(n) time, then the all pairs bottleneck paths problem on an n-vertex graph can be solved in O(M(n)) time.
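As a simple illustration of the reduction (not the subcubic algorithm itself), one can solve APBP by repeatedly squaring the weight matrix under the (max, min)-product; ⌈log n⌉ squarings suffice because the ∞ diagonal of Definition 2.2 makes squaring monotone while doubling the path lengths accounted for. A cubic-per-squaring Python sketch of our own:

```python
from math import ceil, log2

INF = float('inf')

def apbp(w):
    """All pairs bottleneck values by repeated (max, min)-squaring of the
    weight matrix, in the spirit of Corollary 4.2.  Convention of
    Definition 2.2: w[u][u] = INF and w[u][v] = -INF for non-edges, so
    squaring never decreases an entry."""
    n = len(w)

    def square(M):
        return [[max(min(M[i][k], M[k][j]) for k in range(n))
                 for j in range(n)] for i in range(n)]

    C = [row[:] for row in w]
    for _ in range(max(1, ceil(log2(n)))):
        C = square(C)
    return C
```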
We note that the condition M(2n) ≥ 4M(n) is not really restrictive: since we need to write the output, M(n) ≥ n^2, and all the time bounds considered in this paper satisfy the condition. We now show how to compute the (max, min)-product in truly subcubic time, using the sparse dominance algorithm combined with another idea.

Proof of Theorem 4.3. We first compute, for every pair i, j, the maximum A[i, k] (over all k) such that A[i, k] ≤ B[k, j], storing the results in a matrix A'. Afterwards, we reverse the roles of A and B, computing for every pair i, j the maximum B[k, j] (over all k) such that B[k, j] ≤ A[i, k], storing the results in a matrix B'. Then we take C[i, j] = max{A'[i, j], B'[i, j]}: for each k, min{A[i, k], B[k, j]} is accounted for either in A' or in B', so the larger of the two entries is the desired product entry. Since the above two cases (of computing A' and B') are symmetric, it suffices to show how to compute A', where A'[i, j] = max{A[i, k] : A[i, k] ≤ B[k, j]}.

To do this, we employ a strategy similar to one used to obtain maximum witnesses for matrix multiplication [17]. In particular, for each i, j = 1, ..., n, we "narrow down" the possible choices for an appropriate k to one of g possible entries. This is done by a careful application of O(n/g) sparse dominance products, in O(n^(2+ω/2)/√g) time. Then for each i, j, we directly check which of the g possible entries are valid, if any. This takes O(n^2 · g) time. Choosing g optimally results in a subcubic time bound.

For every row i of matrix A, make a sorted list R_i of the entries in that row. Pick a parameter g. Partition the entries of each sorted list R_i into buckets, so that for every R_i there are ⌈n/g⌉ buckets with at most g entries in each bucket. For every bucket value b = 1, ..., ⌈n/g⌉, compute C_b = SD(A, B, S_b), where SD is the sparse dominance product from Theorem 3.2 and S_b = {(i, k) : A[i, k] lies in bucket b of R_i}. Notice that for every bucket value b, we have |S_b| ≤ ng. By Theorem 3.2, all matrices C_b can be computed in O((n/g) · √(ng) · n^((1+ω)/2)) = O(n^(2+ω/2)/√g) time. Now for every pair i, j, we determine the largest bucket b_ij of R_i for which there exists a k such that A[i, k] lies in bucket b_ij and A[i, k] ≤ B[k, j]. (This is obtained by taking the largest b_ij such that C_{b_ij}[i, j] > 0. Note we can easily compute b_ij during the computation of the C_b.) For every i, j, we then examine the entries in bucket b_ij of R_i to obtain the maximum A[i, k] (and hence the corresponding k) satisfying A[i, k] ≤ B[k, j]. Since there are at most g entries in a bucket, each pair i, j can be processed in O(g) time. Therefore, this last step takes O(n^2 · g) time. To pick a value for g that minimizes the runtime, we set n^2 · g = n^(2+ω/2)/√g, obtaining g = n^(ω/3) and an overall runtime of O(n^(2+ω/3)). Plugging in the best known value for ω by Coppersmith and Winograd [5], the runtime bound becomes O(n^2.792).
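The decomposition C[i, j] = max{A'[i, j], B'[i, j]} used in the first step of the proof can be verified directly by brute force (the paper computes A' and B' with sparse dominance products instead; this Python sketch with names of our choosing only checks correctness of the split):

```python
INF = float('inf')

def max_min_via_decomposition(A, B):
    """Checks the first step of the proof: the (max, min)-product entry
    C[i][j] equals max(A'[i][j], B'[i][j]) with
      A'[i][j] = max{A[i][k] : A[i][k] <= B[k][j]}  and
      B'[i][j] = max{B[k][j] : B[k][j] <= A[i][k]}."""
    n = len(A)
    C = [[-INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            a = max((A[i][k] for k in range(n) if A[i][k] <= B[k][j]),
                    default=-INF)
            b = max((B[k][j] for k in range(n) if B[k][j] <= A[i][k]),
                    default=-INF)
            C[i][j] = max(a, b)
    return C
```

For every k, min{A[i][k], B[k][j]} falls into at least one of the two cases (the entries are totally ordered), so the maximum of the two quantities is exactly max_k min{A[i][k], B[k][j]}.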

Computing explicit maximum bottleneck paths
By Corollary 4.2 we can obtain a matrix representing all pairs bottleneck path weights in an edge-weighted graph in O(n^(2+ω/3)) time. To be able to compute actual paths, a bit more work is necessary. Since paths can have linear length in general, listing the optimal paths between all pairs of vertices might require cubic time. To tackle this hurdle, we proceed as is common in the shortest paths literature: instead of explicitly representing the optimal paths, we store an n × n matrix of successor nodes, so that given this matrix, a maximum bottleneck path P between any pair of vertices can be recovered in time linear in the length of P.
To build the successor matrix, we take an approach analogous to that used by Zwick [30] in solving the APSP problem. First, we compute APBP by repeatedly squaring the original adjacency matrix via the (max, min)-product, instead of the approach in Aho et al. [1]. We also record, for every pair of nodes i, j, the last iteration T[i, j] of the repeated squaring phase in which the bottleneck edge weight was changed, together with a witness vertex w_ij on a path from i to j, provided by the (max, min)-product computation in that iteration.
Given an iteration matrix T and a witness matrix w_ij (derived from a shortest path computation), Zwick [30] gives a procedure which computes a matrix of successors in O(n^2) time, and another procedure that, given a matrix of successors and a pair of nodes, returns a simple shortest path between the nodes. Applying his procedures to our setting, we get simple maximum bottleneck paths: to output a maximum bottleneck path from i to j given the successor matrix S, one outputs S[i, j] and then recursively outputs the path from S[i, j] to j. This clearly takes time linear in the length of the path.
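Path recovery from a successor matrix is then a simple walk. In this Python sketch (assuming a successor matrix S has already been built by the Zwick-style procedure; the representation is ours), S[i][j] is the vertex following i on a maximum bottleneck path from i to j:

```python
def extract_path(S, i, j):
    """Walk the successor matrix: S[i][j] is the vertex after i on a
    maximum bottleneck path from i to j.  Takes time linear in the
    length of the returned path."""
    path = [i]
    while i != j:
        i = S[i][j]
        path.append(i)
    return path
```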

All Pairs Bottleneck Shortest Paths
We first recall the well-known short-path-long-path method [30,12,2]. This method is quite general and is used to tackle various all pairs path problems.

The short-path-long-path method
All pairs path problems can often be viewed as problems of computing the transitive closure of a given (adjacency) matrix over a given algebraic structure. The given algebraic structure consists of a set of elements R and two binary operations ⊕ : R × R → R and ⊗ : R × R → R (where ⊕ is commutative), so that the (⊕, ⊗) product of two n × n matrices A and B is well-defined as (A ⊗ B)[i, j] = ⊕_k (A[i, k] ⊗ B[k, j]). For instance, APSP is the problem of computing the transitive closure of a nonnegative real matrix with 0s on the diagonal, over the (min, +)-semiring (also called tropical); APBP is the problem of computing the transitive closure of a real matrix with ∞s on the diagonal over the (max, min)-semiring (also called subtropical). The short-path-long-path method is particularly useful in cases when the ⊗ operation in the underlying algebraic structure is not necessarily associative or commutative, or if it does not fully distribute over ⊕ (see e.g. [27]). In such a case, computing the transitive closure of an n × n matrix seems to require n iterations of the product (hence taking Ω(n^3) operations), whereas computing the transitive closure over a semiring, for instance, can be done in asymptotically the same time as computing the matrix product [8]. The method also applies when the all pairs path problem is not necessarily a transitive closure problem, but for which computing n (⊕, ⊗) matrix products can be used to solve the problem (see e.g. [2]). This is the case for the all pairs bottleneck shortest paths problem.
In the method one first chooses a parameter ℓ < n and iteratively computes the underlying (⊕, ⊗) matrix product on pairs of matrices, obtaining each consecutive pair from the previously computed products. This computation allows one to obtain best paths of length at most ℓ between all pairs of vertices, possibly collecting other information to be used later. A common instance of the product iteration phase is just computing the ℓ-th power of the adjacency matrix. This first phase of the method takes O(M(n) · ℓ) time, where M(n) is the time to compute the particular matrix product.
After this, the following lemma is used to obtain in O(n^2) time a set of O((n log n)/ℓ) vertices hitting all shortest paths between pairs of vertices at distance ℓ. The lemma follows directly from the analysis of the greedy algorithm for hitting set.

Lemma 5.1 ([18, 15, 25]). Given a collection of N subsets of {1, ..., n}, each of size ℓ, one can find in O(Nℓ) time a set of at most (n/ℓ)(1 + ln N) elements of {1, ..., n} hitting every one of the subsets.
An easy way to achieve the above performance bounds by a randomized algorithm is to sample a set of (cn ln N)/ℓ elements from {1, ..., n} independently, uniformly at random, for a constant c > 1. By a standard argument one can show that such a sample is a hitting set with probability at least 1 − 1/N^(c−1) (cf. [30]).
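The randomized alternative can be sketched as follows (function and parameter names and the retry loop are ours; a fixed default seed keeps the sketch reproducible):

```python
import random
from math import ceil, log

def random_hitting_set(subsets, n, c=2.0, rng=None):
    """Sample ceil(c * n * ln N / l) elements of {0, ..., n-1} uniformly
    and independently, where l is the smallest subset size.  By the
    standard argument the sample hits every subset with probability at
    least 1 - 1/N^(c-1); we simply retry on the rare failure."""
    rng = rng or random.Random(12345)
    N = len(subsets)
    l = min(len(s) for s in subsets)
    size = min(n, ceil(c * n * log(max(N, 2)) / l))
    while True:
        sample = {rng.randrange(n) for _ in range(size)}
        if all(sample & set(s) for s in subsets):
            return sample
```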
After obtaining the hitting set, one argues that for any pair of vertices, a best path of length ≥ ℓ (if one exists) must contain a node from the hitting set. One designs an algorithm for the single source version of the path problem, running in, say, T(n) time. Then one runs this algorithm from all nodes in the hitting set in O((nT(n) log n)/ℓ) time (possibly in both directions). Finally, one combines the results in O((n^3 log n)/ℓ) time by considering every pair of nodes and every possible midpoint from the hitting set.
Suppose T(n) = O(n^2), as is the case for most problems to which Dijkstra's algorithm applies. Then the overall running time is minimized when ℓ = √((n^3 log n)/M(n)), and the runtime becomes O(n^1.5 · √(M(n) log n)).

An algorithm for APBSP
We first give an algorithm for the single source version of the bottleneck shortest paths problem (SSBSP).
Lemma 5.2. SSBSP on a graph with m edges and n nodes can be solved in O(m + n) time.
Proof. The algorithm is an adaptation of breadth-first search. Let G = (V, E, w) be the given edge-weighted graph and s the source node. We maintain a set Q_i which at stage i contains the nodes at (unweighted) distance i from s. The set needs to support insertion, popping an element, and going through the elements one by one; a linked list supports all of these operations in O(1) time. Every node v in the graph has a bit visited(v) which is set if and only if an edge from an in-neighbor of v to v has been traversed. Node v also has values d(v) and b(v) associated with it: d(v) will be the (unweighted) shortest distance from s to v, and b(v) will be the maximum bottleneck edge weight over shortest paths from s to v. Initially, d(v) = ∞ for v ≠ s and d(s) = 0; b(v) = −∞ for all v ≠ s and b(s) = ∞; visited(v) = 0 for all v ≠ s, and visited(s) = 1.
We begin by inserting each out-neighbor v of s into Q_1, setting d(v) = 1, b(v) = w(s, v), and visited(v) = 1. We then process Q_1.
To process Q_i, repeat the following: pop a node v from Q_i; for all out-neighbors u of v:
• if u is not visited, insert u into Q_{i+1}, set d(u) = i + 1, and set visited(u) = 1;
• if d(u) = i + 1, set b(u) = max{b(u), min{b(v), w(v, u)}}.
Correctness follows by induction: if the distances and bottleneck values for the nodes in Q_{i−1} are correct, then, since every shortest path of length i from s to u must be of the form P_sv followed by an edge (v, u), where P_sv is a shortest path of length i − 1 from s to some v ∈ Q_{i−1}, the values computed for the nodes in Q_i are also correct.

Applying the short-path-long-path method with T(n) = O(m + n) = O(n^2) and M(n) = O(n^(2+ω/3)) (Theorem 4.3), APBSP can be solved in O(n^1.5 · √(M(n) log n)) = Õ(n^((15+ω)/6)) = O(n^2.896) time.
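Lemma 5.2's modified breadth-first search can be written out as follows (a Python sketch with an adjacency-list representation of our choosing; levels are processed explicitly rather than with a single mixed queue):

```python
INF = float('inf')

def ssbsp(n, adj, s):
    """Single source bottleneck shortest paths (Lemma 5.2).  For every
    vertex v: d[v] = unweighted distance from s, and b[v] = maximum
    bottleneck weight over all shortest (fewest-edge) paths from s to v.
    adj[u] is a list of (v, weight) pairs; runs in O(m + n) time."""
    d = [INF] * n
    b = [-INF] * n
    d[s], b[s] = 0, INF
    level, i = [s], 0
    while level:
        nxt = []
        for v in level:
            for u, w in adj[v]:
                if d[u] == INF:        # first edge into u: record d(u)
                    d[u] = i + 1
                    nxt.append(u)
                if d[u] == i + 1:      # u is one level below v
                    b[u] = max(b[u], min(b[v], w))
        level, i = nxt, i + 1
    return d, b
```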

Conclusion
We have provided the first truly subcubic algorithms for all pairs bottleneck paths and all pairs bottleneck shortest paths in general dense graphs, with no restrictions on edge weights or edge directions. Our approach combines several different ingredients from past work, along with a few new ideas, to reduce the problem of computing the (max, min) matrix product to a small collection of 0-1 matrix products. Timothy Chan (personal communication) has observed that the running time of our algorithm can be slightly improved (from n^2.792 to n^2.781) by using fast rectangular matrix multiplication [4,14]. More recently, Duan and Pettie [6] have extended our techniques to show that the (max, min) matrix product can be computed in O(n^((3+ω)/2)) = O(n^2.688) time, the best known time for computing the dominance product. It is still an open problem whether the dominance or (max, min)-products can be computed in O(n^ω) time.
The most pressing question from our work is whether the ideas from our (max, min) matrix product algorithm can be extended further to obtain an O(n^(3−δ)) algorithm for the (min, +) matrix product (that is, the distance product). Note we already know that the dominance approach can be used to obtain the k most significant bits of the distance product in O(2^k · n^((3+ω)/2)) time [28]. An affirmative answer would immediately imply a truly subcubic APSP algorithm for general graphs, resolving a longstanding and prominent open problem.

Definition 1.1. The (max, min)-product of an n × ℓ matrix A and an ℓ × m matrix B is the n × m matrix C = A ⊛ B such that C[i, j] = max_k min{A[i, k], B[k, j]} for all i = 1, ..., n and j = 1, ..., m.

THEORY OF COMPUTING, Volume 5 (2009), pp. 173-189

Proof of Corollary 3.4. Suppose A has m_1 relevant entries and B has m_2. Sort each column k of A and row k of B together; for each k, let L_k be the sorted list for column/row k. Let D be a parameter to be chosen later. For every column/row k, let m_1k be the number of relevant entries of A in L_k, so that ∑_k m_1k = m_1. Partition each list L_k into consecutive buckets, so that each bucket (except possibly the last one) has m_1/D relevant elements of A. Comparing the elements within bucket b of list k takes ∑_b g_bk · (m_1/D) time, where g_bk is the number of relevant B-elements in bucket b of L_k; overall, the runtime of the within-bucket comparisons is O(m_2 + m_1·m_2/D). To handle comparisons between buckets we do the following. Create matrices C' and C'', where C' is n × O(D) and C'' is O(D) × n. The columns of C' and the rows of C'' have indices (k, b) for bucket b of L_k, provided L_k has at least 2 buckets. We set C'[i, (k, b)] = 1 if and only if (i, k) ∈ S_A and A[i, k] lies in bucket b of L_k, and C''[(k, b), j] = 1 if and only if (k, j) ∈ S_B and B[k, j] lies in a bucket of L_k strictly after b; then (C' × C'')[i, j] counts the comparisons resolved between distinct buckets, and when we sum the within-bucket and between-bucket counts we always count different comparisons. The number of coordinates (k, b) is at most 2D, so the product of C' and C'' can be computed via Lemma 3.3.

Theorem 4.3 (Max-Min Product). Given two n × n matrices A and B, the matrix C with C[i, j] = max_k min{A[i, k], B[k, j]} can be computed in O(n^(2+ω/3)) time. Moreover, for each pair of indices i, j, the algorithm returns an index k satisfying min{A[i, k], B[k, j]} = C[i, j].