Deterministic Approximation of Random Walks in Small Space

We give a deterministic, nearly logarithmic-space algorithm that given an undirected graph $G$, a positive integer $r$, and a set $S$ of vertices, approximates the conductance of $S$ in the $r$-step random walk on $G$ to within a factor of $1+\epsilon$, where $\epsilon>0$ is an arbitrarily small constant. More generally, our algorithm computes an $\epsilon$-spectral approximation to the normalized Laplacian of the $r$-step walk. Our algorithm combines the derandomized square graph operation (Rozenman and Vadhan, 2005), which we recently used for solving Laplacian systems in nearly logarithmic space (Murtagh, Reingold, Sidford, and Vadhan, 2017), with ideas from (Cheng, Cheng, Liu, Peng, and Teng, 2015), which gave an algorithm that is time-efficient (while ours is space-efficient) and randomized (while ours is deterministic) for the case of even $r$ (while ours works for all $r$). Along the way, we provide some new results that generalize technical machinery and yield improvements over previous work. First, we obtain a nearly linear-time randomized algorithm for computing a spectral approximation to the normalized Laplacian for odd $r$. Second, we define and analyze a generalization of the derandomized square for irregular graphs and for sparsifying the product of two distinct graphs. As part of this generalization, we also give a strongly explicit construction of expander graphs of every size.


Introduction
Random walks provide the most dramatic example of the power of randomized algorithms for solving computational problems in the space-bounded setting, as they only require logarithmic space (to store the current state or vertex). In particular, since undirected graphs have polynomial cover time, random walks give a randomized logspace (RL) algorithm for UNDIRECTED S-T CONNECTIVITY [2]. Reingold [34] showed that this algorithm can be derandomized, and hence that UNDIRECTED S-T CONNECTIVITY is in deterministic logspace (L). However, Reingold's algorithm does not match the full power of random walks on undirected graphs; in particular it does not allow us to approximate properties of the random walk at lengths below the mixing time.
In this article, we provide a nearly logarithmic-space algorithm for approximating properties of arbitrary-length random walks on an undirected graph, in particular the conductance of any set of vertices.

Definition 1.1. Let G = (V, E) be an undirected graph, r a positive integer, and S ⊆ V a set of vertices. The conductance of S under the r-step random walk on G is defined as
Φ(S) = Pr[V_r ∉ S | V_0 ∈ S],
where V_0, V_1, ..., V_r is a random walk on G started at the stationary distribution Pr[V_0 = v] = deg(v)/2|E|.

Theorem 1.2. There is a deterministic algorithm that given an undirected multigraph G on n vertices, a positive integer r, a set of vertices S, and ε > 0, computes a number Φ̃ such that
(1 − ε)·Φ(S) ≤ Φ̃ ≤ (1 + ε)·Φ(S),
and runs in space O(log N + (log r)·log(1/ε) + (log r)·log log r), where N is the bit length of the input graph G (the length of the description of G as a list of edges, in a binary encoding).
Previously, approximating the conductance could be done in space O(log^{3/2}(N/ε) + log log r), which follows from the proof by Saks and Zhou that RL ⊆ L^{3/2} [37].
Two interesting parameter regimes where we improve the Saks–Zhou bound are when r = 1/ε = 2^{O(√(log N))}, in which case our algorithm runs in space O(log N), or when ε = 1/polylog(N) and r ≤ poly(N), in which case our algorithm runs in space O(log N · log log N). When r exceeds the poly(N)·log(1/ε) time for random walks on undirected graphs to mix to within distance ε of the stationary distribution, the conductance can be approximated in space O(log(N/ε) + log log r): use Reingold's algorithm to find the connected components of G and the bipartitions of the components that are bipartite, and calculate the stationary probability of S restricted to each of these pieces, which is proportional to the sum of degrees of vertices in S.
We prove Theorem 1.2 by providing a stronger result: with the same amount of space it is possible to compute an ε-spectral approximation to the normalized Laplacian of the r-step random walk on G.

Definition 1.3. Let G be an undirected graph with adjacency matrix A, diagonal degree matrix D, and transition matrix T = AD^{-1}. The transition matrix for the r-step random walk on G is T^r. The normalized Laplacian of the r-step random walk is the symmetric matrix I − M^r, for M = D^{-1/2}AD^{-1/2}.

Spectral approximation of the normalized Laplacian is a strong guarantee: it implies that the two matrices have similar eigenvalues, and that their corresponding graphs have similar cuts and random walk behavior [39, 9]. Note that the normalized Laplacian can also be expressed as I − M^r = D^{-1/2}(I − T^r)D^{1/2}, so it does indeed capture the behavior of r-step random walks on G.

Theorem 1.4 (Main result). There is a deterministic algorithm that given an undirected multigraph G on n vertices with normalized Laplacian I − M, a nonnegative integer r, and ε > 0, constructs an undirected multigraph G̃ whose normalized Laplacian L̃ is an ε-spectral approximation of L = I − M^r. That is, for all v ∈ R^n,
(1 − ε)·v^T L v ≤ v^T L̃ v ≤ (1 + ε)·v^T L v.
The algorithm runs in space O(log N + (log r)·log(1/ε) + (log r)·log log r), where N is the bit length of the input graph G.
Theorem 1.2 follows from Theorem 1.4 by taking v to be D^{1/2}e_S, where e_S is the characteristic vector of the set S, and normalizing appropriately (see Section 5).
Our main technique for proving Theorem 1.4 is the derandomized product, a new generalization of the derandomized square, which was introduced by Rozenman and Vadhan [36] to give an alternative proof that UNDIRECTED S-T CONNECTIVITY is in L. Our main result follows from carefully applying the derandomized product and analyzing its properties with inequalities from the theory of spectral approximation. Specifically, our analysis is inspired by the work of Cheng, Cheng, Liu, Peng, and Teng [14], who studied the approximation of random walks by randomized algorithms running in nearly linear time. We emphasize that [14] gives a randomized algorithm with high space complexity (but low time complexity) for approximating properties of even-length walks, while we give a deterministic, space-efficient algorithm for approximating properties of walks of every length. Interestingly, while the graphs in our results are all undirected, some of our analyses use techniques for spectral approximation of directed graphs introduced by Cohen, Kelner, Peebles, Peng, Rao, Sidford, and Vladu [16,15].
The derandomized square can be viewed as applying the pseudorandom generator of Impagliazzo, Nisan, and Wigderson [23] to random walks on labelled graphs. It is somewhat surprising that repeated derandomized squaring does not blow up the error by a factor proportional to the length of the walk being derandomized. For arbitrary branching programs, the INW generator does incur error that is linear in the length of the program. Some special cases such as regular [11,12,17] and permutation [17,40] branching programs of constant width have been shown to have a milder error growth as a function of the walk length. Our work adds to this list by showing that properties of random walks of length k on undirected graphs can be estimated in terms of spectral approximation without error accumulating linearly in k.
In our previous paper [30], we showed that the Laplacian of the derandomized square of a regular graph spectrally approximates the Laplacian of the true square, I − M², and this was used in a recursion from [33] to give a nearly logarithmic-space algorithm for approximately solving Laplacian systems Lx = b. A natural idea for approximating the Laplacian of higher powers, I − M^r, is to repeatedly apply the derandomized square. This raises three challenges, and we achieve our result by showing how to overcome each:

1. It is not guaranteed by [30] that repeated derandomized squaring preserves spectral approximation. For this, we use ideas from [14] to argue that it does.
2. When r is not a power of 2, the standard approach would be to write M^r = ∏_{i : b_i = 1} M^{2^i}, where b_i is the i-th bit of r, and to multiply approximations to M^{2^i} for all i such that b_i = 1. The problem is that multiplying spectral approximations of matrices does not necessarily yield a spectral approximation of their product. Our solution is to generalize the derandomized square to produce sparse approximations to the product of distinct graphs. In particular, given I − M and an approximation I − M̃ to I − M^k, our derandomized product allows us to combine M and M̃ to approximate I − M^{k+1}. Although our generalized graph product is defined for undirected graphs, its analysis uses machinery for spectral approximation of directed graphs, introduced in [15].
3. We cannot assume that our graph is regular without loss of generality. In contrast, [34, 36, 30] could make this assumption, since adding self-loops does not affect connectivity or solutions to Laplacian systems of G; it does, however, affect random walks. Our solution is to define and analyze the derandomized product for irregular graphs.
A key element in the derandomized product is a strongly explicit (i.e., neighbor relations can be computed in time polylog(N) and space O(log N)) construction of expander graphs whose sizes equal the degrees of the vertices in the graphs being multiplied. Many commonly used strongly explicit expander families only contain graphs whose sizes form restricted subsets of ℕ, such as powers of 2 (Cayley graphs based on [32] and [5]), perfect squares [28, 18], and other size distributions [35], or are only mildly explicit in the sense of having running time or parallel work that is poly(N) [26]. It was folklore that one can convert any sufficiently dense family of expanders (such as the aforementioned constructions) into a family of expanders of every size, and there were offhand remarks in the literature to this effect (see, e.g., [7], [19]), albeit without an analysis of expansion or explicitness. In Section 3, we give a self-contained description and proof of such a reduction, with an analysis in terms of both strong explicitness and spectral expansion. Subsequent to our work, Goldreich [20] has provided an analysis of the construction previously described in [19] in terms of vertex expansion, and Alon [4] has given a stronger reduction that provides nearly Ramanujan expanders (i.e., ones with nearly optimal spectral expansion as a function of the degree) of every size N.
Many of our techniques are inspired by Cheng, Cheng, Liu, Peng, and Teng [14], who gave two algorithms for approximating random walks. One is a nearly linear-time randomized algorithm for approximating random walks of even length, and the other works for all walk lengths r but has a running time that is quadratic in r, and so only yields a nearly linear-time algorithm for r that is polylogarithmic in the size of the graph. In addition, [24] studied the problem of computing sparse spectral approximations of random walks, but the running time in their work also has a quadratic dependence on r. We extend these results by giving a nearly linear-time randomized algorithm for computing a spectral approximation to I − M^r for all r. This is discussed in Section 5.

Subsequent work. Subsequent to [31], the conference version of this paper, Ahmadinejad et al. [1] strengthened Theorem 1.2 by providing a deterministic algorithm that, when given a graph G, two vertices s and t, and a positive integer r, estimates the probability that a random walk of length r in G started at s ends at t to within ±ε and uses space O(log(r·N)·log log(r·N/ε)). Furthermore, their result applies to the more general setting of Eulerian digraphs. In undirected graphs, all nonzero conductances can be shown to be of magnitude at least 1/poly(N) and can be expressed as a sum of at most N²/4 random walk probabilities (entries of powers of the transition matrix). Consequently, with the same space bound the algorithm of [1] can estimate the conductance of any set S to within a multiplicative factor of (1 ± 1/poly(N)). While the result of [1] is generally stronger than Theorem 1.2, with a better dependence on the approximation parameter ε, the algorithm and the analysis in our proof of Theorem 1.2 are considerably simpler than the corresponding items in [1]. The other results in our paper are not superseded by [1]. In particular, we believe that the derandomized graph product that we introduce to prove Theorem 1.2 is interesting in its own right. In addition, we fill a gap in the literature by extending the main result of [14] to compute sparse approximations of I − M^r for odd r in nearly linear time, while their theorem works only for even r.

Spectral graph theory
Given an undirected multigraph G, the Laplacian of G is the symmetric matrix D − A, where D is the diagonal matrix of vertex degrees and A is the adjacency matrix of G. The transition matrix of the random walk on G is T = AD^{-1}: entry T_{ij} is the probability that a uniformly random edge out of vertex j leads to vertex i (i.e., the number of edges between j and i divided by the degree of j). The normalized Laplacian of G is the symmetric matrix I − M = D^{-1/2}(D − A)D^{-1/2}. Note that when G is regular, D is a multiple of the identity and M and T are the same matrix: M = D^{-1/2}AD^{-1/2} = AD^{-1} = T.

The transition matrix of the r-step random walk on G is T^r. For every probability distribution π, T^r π is the distribution over vertices that results from picking a random vertex according to π and then running a random walk on G for r steps. The transition matrix of the r-step random walk is related to the normalized Laplacian by I − M^r = D^{-1/2}(I − T^r)D^{1/2}.

For an undirected graph G with adjacency matrix A and a positive integer k, we write k·G to denote the graph with adjacency matrix k·A, i.e., the multigraph G with all edges duplicated to have multiplicity k. Given a symmetric matrix L, its Moore–Penrose pseudoinverse, denoted L^†, is the unique matrix with the same eigenvectors as L such that for each eigenvalue λ of L, the corresponding eigenvalue of L^† is 1/λ if λ ≠ 0 and 0 otherwise. When L is a Laplacian, we write L^{†/2} to denote the unique symmetric PSD matrix square root of the pseudoinverse of L.
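These definitions are easy to make concrete. The following numpy sketch (our own toy example, not from the paper) builds the matrices above for a small irregular multigraph and checks the identity I − M^r = D^{-1/2}(I − T^r)D^{1/2}:

```python
import numpy as np

# A small irregular multigraph given by its adjacency matrix A (symmetric,
# nonnegative integer entries; A[i, j] = number of edges between i and j).
A = np.array([[0, 2, 1],
              [2, 0, 1],
              [1, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=0))                      # diagonal degree matrix
T = A @ np.linalg.inv(D)                        # transition matrix T = A D^{-1}
D_inv_sqrt = np.diag(1 / np.sqrt(np.diag(D)))
D_sqrt = np.diag(np.sqrt(np.diag(D)))
M = D_inv_sqrt @ A @ D_inv_sqrt                 # M = D^{-1/2} A D^{-1/2}
r = 3
L_r = np.eye(3) - np.linalg.matrix_power(M, r)  # normalized Laplacian of the r-step walk
# Consistency check: I - M^r = D^{-1/2} (I - T^r) D^{1/2}
assert np.allclose(L_r, D_inv_sqrt @ (np.eye(3) - np.linalg.matrix_power(T, r)) @ D_sqrt)
```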
To measure the approximation between graphs we use spectral approximation [38]:

Definition 2.1. Let L, L̃ ∈ R^{n×n} be symmetric PSD matrices. We say that L̃ is an ε-approximation of L (written L̃ ≈_ε L) if for all v ∈ R^n,
(1 − ε)·v^T L v ≤ v^T L̃ v ≤ (1 + ε)·v^T L v.

Note that Definition 2.1 is not symmetric in L and L̃. Spectral approximation can also be written in terms of the Loewner partial ordering of PSD matrices:
L̃ ≈_ε L ⟺ (1 − ε)·L ⪯ L̃ ⪯ (1 + ε)·L,
where for two matrices A, B we write A ⪯ B if B − A is PSD. Spectral approximation has a number of useful properties, listed in the following proposition.
Proposition 2.2. If W, X, Y, Z ∈ R^{n×n} are symmetric PSD matrices, then:
1. if X ≈_ε Y for ε < 1, then Y ≈_{ε/(1−ε)} X,
2. if X ≈_{ε_1} Y and Y ≈_{ε_2} Z, then X ≈_{ε_1+ε_2+ε_1·ε_2} Z,
3. if X ≈_ε Y, then M X M^T ≈_ε M Y M^T for every matrix M ∈ R^{n×n},
4. if X ≈_ε Y, then X ≈_{ε'} Y for every ε' ≥ ε,
5. if W ≈_{ε_1} X and Y ≈_{ε_2} Z, then W + Y ≈_{max{ε_1,ε_2}} X + Z, and
6. if X ≈_ε Y, then c·X ≈_ε c·Y for all nonnegative scalars c.

Following [41, 6, 3], we measure how well-connected a regular graph is using the second eigenvalue: for a regular undirected graph G with transition matrix T, let λ(G) denote the second largest eigenvalue of T in absolute value. The smaller λ(G) is, the faster a random walk on G converges to the uniform distribution. Graphs G with λ(G) bounded away from 1 are called expanders. Expanders can equivalently be characterized as graphs that spectrally approximate the complete graph. This is formalized in the next lemma.

Lemma 2.4. Let G be a c-regular undirected multigraph on n vertices with normalized Laplacian I − M and λ(G) ≤ λ, and let I − J be the normalized Laplacian of the complete graph on n vertices with a self-loop on every vertex (so every entry of J is 1/n). Then I − M ≈_λ I − J.
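The next numpy sketch (a toy illustration of our own) computes the tightest ε of Definition 2.1 for a given pair of matrices and checks the guarantee of Lemma 2.4 on a small regular graph:

```python
import numpy as np

def tightest_eps(L, Lt, tol=1e-9):
    """Smallest eps such that Lt ~_eps L (Definition 2.1), assuming ker L = ker Lt."""
    w, V = np.linalg.eigh(L)
    keep = w > tol
    R = V[:, keep] / np.sqrt(w[keep])          # restrict to the complement of ker(L)
    return np.abs(np.linalg.eigvalsh(R.T @ Lt @ R) - 1).max()

# A 4-regular graph on 6 vertices: the complete graph minus a perfect matching.
n = 6
A = np.ones((n, n)) - np.eye(n)
for i in range(0, n, 2):
    A[i, i + 1] = A[i + 1, i] = 0
M = A / 4                                       # normalized adjacency of a 4-regular graph
J = np.ones((n, n)) / n                         # complete graph with self-loops
lam = np.sort(np.abs(np.linalg.eigvalsh(M)))[-2]    # lambda(G); here 1/2
print(tightest_eps(np.eye(n) - J, np.eye(n) - M) <= lam + 1e-9)  # True, per Lemma 2.4
```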
In [15] Cohen, Kelner, Peebles, Peng, Rao, Sidford, and Vladu introduced a definition of spectral approximation for asymmetric matrices. Although the results in our paper only concern undirected graphs, some of our proofs use machinery from the theory of directed spectral approximation.
Definition 2.5 (Directed Laplacian [16, 15]). A matrix L ∈ R^{n×n} is called a directed Laplacian if L_{ij} ≤ 0 for all i ≠ j and each column of L sums to zero, i.e., L_{ii} = −∑_{j≠i} L_{ji} for every i ∈ [n]. The associated directed graph has n vertices and an edge (i, j) of weight −L_{ji} for all i ≠ j ∈ [n] with L_{ji} ≠ 0.

Definition 2.6 (Asymmetric Matrix Approximation [15]). Let L̃ and L be (possibly asymmetric) matrices such that U = (L + L^T)/2 is PSD. We say that L̃ is a directed ε-approximation of L if:
1. ker(U) ⊆ ker(L̃ − L) ∩ ker((L̃ − L)^T), and
2. ‖U^{†/2}(L̃ − L)U^{†/2}‖ ≤ ε.

Below we state some useful lemmas about directed spectral approximation. The first gives an equivalent formulation of Definition 2.6.

Lemma 2.7 ([15]). Let L̃ and L be matrices such that U = (L + L^T)/2 is PSD. Then L̃ is a directed ε-approximation of L if and only if for all x, y ∈ R^n,
|x^T(L̃ − L)y| ≤ (ε/2)·(x^T U x + y^T U y).

Lemma 2.8 ([15]). If L̃ is a directed ε-approximation of L, then Ũ = (L̃ + L̃^T)/2 is a directed ε-approximation of U = (L + L^T)/2.

Lemma 2.9. Let L and L̃ be symmetric PSD matrices. Then L̃ is a directed ε-approximation of L if and only if L̃ ≈_ε L.

Proof. For the "if" direction, suppose L̃ ≈_ε L. Equivalently,
−ε·L ⪯ L̃ − L ⪯ ε·L.
Multiplying on the left and right by L^{†/2} preserves the ordering and yields
−ε·L^{†/2}LL^{†/2} ⪯ L^{†/2}(L̃ − L)L^{†/2} ⪯ ε·L^{†/2}LL^{†/2}.
For each nonzero eigenvalue λ of L, L^{†/2} has corresponding eigenvalue 1/√λ, and hence ‖L^{†/2}LL^{†/2}‖ ≤ 1. Since L is symmetric we have U := (L + L^T)/2 = L. Combining this with the above gives ‖U^{†/2}(L̃ − L)U^{†/2}‖ ≤ ε. Since L̃ ≈_ε L by assumption, we must have ker(L) = ker(L̃), so the kernel condition of Definition 2.6 holds as well. The "only if" direction follows by running the same argument in reverse.

Space-bounded computation
We use a standard model of space-bounded computation where the machine M has a read-only input tape, a constant number of read/write work tapes, and a write-only output tape. If, throughout every computation on inputs of length at most n, M uses at most s(n) total tape cells on all the work tapes, we say M runs in space s = s(n). Note that M may write more than s cells (in fact as many as 2^{O(s)}), but the output tape is write-only. The following proposition describes the behavior of space complexity when space-bounded algorithms are composed.

Proposition 2.10. Let f_1, f_2 be functions that can be computed in space s_1(n), s_2(n) ≥ log n, respectively, and suppose f_1 has output of length at most ℓ_1(n) on inputs of length n. Then f_2 ∘ f_1 can be computed in space O(s_2(ℓ_1(n)) + s_1(n)).

A proof of Proposition 2.10 can be found in [8].

Rotation maps
In the space-bounded setting, it is convenient to use local descriptions of graphs. Such descriptions allow us to navigate large graphs without loading them entirely into memory. For this we use rotation maps: functions that describe graphs through their neighbor relations. Rotation maps are defined for graphs with labeled edges, as described in the following definition.

Definition 2.11 (Two-way labeling [36]). A two-way labeling of an undirected multigraph G = (V, E) is a labeling of the edges of G such that:
1. every edge (u, v) ∈ E has two labels: one in [d_u] as an edge incident to u and one in [d_v] as an edge incident to v, where d_v denotes the degree of vertex v, and
2. for every vertex v ∈ V, the labels of the d_v edges incident to v are distinct.
In [36], two-way labelings are referred to as undirected two-way labelings. Note that every graph has a two-way labeling in which each vertex "names" its neighbors uniquely in some canonical way based on the order in which they are represented in the input. We will describe multigraphs with two-way labelings using rotation maps:

Definition 2.12 (Rotation map [35]). Let G be an undirected multigraph on n vertices with a two-way labeling. The rotation map Rot_G is defined as follows: Rot_G(v, i) = (w, j) if the i-th edge incident to vertex v leads to vertex w and this edge is the j-th edge incident to w.
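A rotation map is always its own inverse, since the roles of the two endpoints of an edge are symmetric. The following minimal sketch (a hand-labeled 3-cycle of our own choosing) makes this concrete:

```python
# rot[(v, i)] = (w, j) means the i-th edge at v is the j-th edge at w.
rot = {
    (0, 1): (1, 2), (1, 2): (0, 1),
    (1, 1): (2, 2), (2, 2): (1, 1),
    (2, 1): (0, 2), (0, 2): (2, 1),
}
# A valid rotation map is an involution: applying it twice returns the input.
assert all(rot[rot[(v, i)]] == (v, i) for (v, i) in rot)
```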
We will use expanders that have efficiently computable rotation maps; we call such graphs strongly explicit. The usual definition of strong explicitness refers only to time complexity, but we will use it for both time and space.

Definition 2.13. A family of two-way labeled graphs G = {G_{n,c}}_{(n,c)}, where G_{n,c} is a c-regular graph on n vertices, is called strongly explicit if, given n, c, a vertex v ∈ [n], and an edge label a ∈ [c], Rot_{G_{n,c}}(v, a) can be computed in time poly(log(nc)) and space O(log(nc)).

The derandomized product and expanders of all sizes
In this section we introduce our derandomized graph product. The derandomized product generalizes the derandomized square graph operation that was introduced by Rozenman and Vadhan [36] to give an alternative proof that UNDIRECTED S-T CONNECTIVITY is in L. Unlike the derandomized square, the derandomized product is defined for irregular graphs and produces a sparse approximation to the product of any two (potentially different) graphs with the same vertex degrees.
Here, by the "product" of two graphs G 0 , G 1 , we mean the reversible Markov chain with transitions defined as follows: from a vertex v, with probability 1/2 take a random step on G 0 followed by a random step on G 1 and with probability 1/2 take a random step on G 1 followed by a random step on G 0 .
When G_0 = G_1 = G, this is the same as taking a 2-step random walk on G. Note, however, that when G is irregular, a 2-step random walk is not equivalent to a 1-step random walk on the graph G² whose edges correspond to paths of length 2 in G. Indeed, even the stationary distribution of the random walk on G² may differ from the one on G. If A is the adjacency matrix of G, then G² has adjacency matrix A² and thus transition matrix A²D_2^{-1}, where D_2 is a diagonal matrix whose v-th entry is the sum of the degrees of the neighbors of vertex v in G. In contrast, the two-step random walk on G has transition matrix T² = (AD^{-1})², where D is the diagonal matrix of degrees in G. Our goal in the derandomized product is to produce a relatively sparse graph whose 1-step random walk approximates the 2-step random walk on G.
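The distinction is easy to observe numerically. In the following sketch (a toy graph of our own: a triangle with a pendant vertex), the two-step walk T² and the one-step walk on G² disagree:

```python
import numpy as np

# Triangle {0, 1, 2} with a pendant vertex 3 attached to 0 (an irregular graph).
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
D = np.diag(A.sum(axis=0))
T = A @ np.linalg.inv(D)                 # one-step walk on G
two_step = T @ T                         # two-step walk on G
A2 = A @ A                               # adjacency matrix of G^2
T_G2 = A2 @ np.linalg.inv(np.diag(A2.sum(axis=0)))  # one-step walk on G^2
print(np.allclose(two_step, T_G2))       # False: e.g., entry (2, 1) is 1/6 vs. 1/5
```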
The intuition behind the derandomized product is as follows: rather than building a graph that contains every such two-step walk, we use expander graphs to pick a pseudorandom subset of the walks. Specifically, we first pick b ∈ {0, 1} at random. Then, as before, we take a truly random step from v to u in G_b. But for the second step, we do not take a truly random edge out of u in G_{b̄}; rather, we correlate it to the edge on which we arrived at u, using a c-regular expander on deg(u) vertices (recall that we assume the vertex degrees in G_0 and G_1 are the same). When c < deg(u), the vertex degrees of the resulting two-step graph will be smaller than without derandomization. Using the pseudorandom properties of expander graphs, we can nonetheless argue that the derandomized product is a good approximation of the true product. The combinatorial description of the derandomized product is as follows: for every vertex u ∈ [n], we place a bipartite expander between the deg(u) neighbors of u in G_0 and the deg(u) neighbors of u in G_1.
Definition 3.1 (Derandomized Product). Let G_0, G_1 be undirected multigraphs on n vertices with two-way labelings and identical vertex degrees d_1, d_2, ..., d_n. Let H = {H_i} be a family of two-way labeled, c-regular expanders of sizes including d_1, ..., d_n. The derandomized product with respect to H, denoted G_0 ⊙_H G_1, is an undirected multigraph on n vertices with vertex degrees 2·c·d_1, ..., 2·c·d_n, whose rotation map Rot_{G_0 ⊙_H G_1}(v_0, (j_0, a_0, b)), for v_0 ∈ [n], j_0 ∈ [d_{v_0}], a_0 ∈ [c], and b ∈ {0, 1}, is computed as follows:
1. Let (v_1, j_1) = Rot_{G_b}(v_0, j_0).
2. Let (j_2, a_1) = Rot_{H_{d_{v_1}}}(j_1, a_0), where H_{d_{v_1}} ∈ H is the expander on d_{v_1} vertices.
3. Let (v_2, j_3) = Rot_{G_{b̄}}(v_1, j_2), where b̄ denotes the bit-negation of b.
4. Output (v_2, (j_3, a_1, b̄)).
Note that when G_0 = G_1, the derandomized product generalizes the derandomized square [36] to irregular graphs, albeit with each edge duplicated twice (once for b = 0 and once for b = 1). To see that G_0 ⊙_H G_1 is undirected, observe that its rotation map is an involution: applying the three steps of Definition 3.1 to the output label (j_3, a_1, b̄) retraces them in reverse. Note also that Definition 3.1 requires each vertex i to have the same degree d_i in G_0 and G_1; this ensures that the random walks on G_0, G_1, and G_0 ⊙_H G_1 all have the same stationary distribution. The definition can be generalized to the case where there is an integer k such that for each vertex v of degree d_v in G_1, v has degree k·d_v in G_0. For this, we can duplicate each edge in G_1 k times to match the degrees of G_0 and then apply the derandomized product to the result. In such cases we abuse notation and write G_0 ⊙_H G_1 to mean G_0 ⊙_H (k·G_1).
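Definition 3.1 translates directly into code. The following Python sketch is our own rendering (rot_G0, rot_G1, rot_H, and deg are assumed inputs; indices are treated abstractly) and mirrors the three steps:

```python
def derandomized_product_rot(rot_G0, rot_G1, rot_H, deg):
    """Rotation map of G0 (x)_H G1 per Definition 3.1 (a sketch, not the paper's code).
    rot_G0, rot_G1: rotation maps of graphs with identical degrees deg[v];
    rot_H(d): rotation map of the c-regular expander on d vertices."""
    rot = [rot_G0, rot_G1]
    def product_rot(v0, edge):
        j0, a0, b = edge                      # j0 in [deg[v0]], a0 in [c], b in {0, 1}
        v1, j1 = rot[b](v0, j0)               # step 1: walk in G_b
        j2, a1 = rot_H(deg[v1])(j1, a0)       # step 2: expander step on [deg(v1)]
        v2, j3 = rot[1 - b](v1, j2)           # step 3: walk in G_{1-b}
        return v2, (j3, a1, 1 - b)            # step 4: new label, with b negated
    return product_rot
```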
In [30] we showed that the derandomized square produces a spectral approximation to the true square. We now show that the derandomized product also spectrally approximates a natural graph product.

Theorem 3.2. Let G_0, G_1 be undirected multigraphs on n vertices with two-way labelings such that, for some positive integer k, each vertex v has degree k·d_v in G_0 and d_v in G_1. Let D be the degree matrix of G_0 and let M_0, M_1 be the normalized adjacency matrices of G_0 and G_1, respectively. Let H be a family of two-way labeled, c-regular expanders of sizes including all of the vertex degrees, with λ(H) ≤ λ for every H ∈ H, and let I − M̃ be the normalized Laplacian of G_0 ⊙_H G_1. Then
I − M̃ ≈_λ I − (1/2)·(M_0 M_1 + M_1 M_0).

Proof of Theorem 3.2. Note that k·G_1 has the same transition matrix and normalized Laplacian as G_1, so we can replace G_1 with k·G_1 and assume k = 1 without loss of generality.
Since G_0 and G_1 have the same vertex degrees, they share the degree matrix D; let T_0 and T_1 be the transition matrices of G_0 and G_1, respectively. Following the proofs in [36] and [30], we can write the transition matrix for the random walk on G_0 ⊙_H G_1 as
T̃ = (1/2)·(P R_1 B R_0 Q + P R_0 B R_1 Q),
where each matrix corresponds to a step in the definition of the derandomized product, the two terms correspond to b = 0 and b = 1, and, setting d̃ = ∑_{i∈[n]} d_i:

• Q is a d̃ × n matrix that "lifts" a probability distribution over the vertices to one over the edge slots: Q[(v, i), v] = 1/d_v for every v ∈ [n] and i ∈ [d_v], and all other entries are 0.

• R_0 and R_1 are the d̃ × d̃ symmetric permutation matrices corresponding to the rotation maps of G_0 and G_1, respectively. That is, entry [(w, j), (v, i)] of R_b is 1 if Rot_{G_b}(v, i) = (w, j) and 0 otherwise.

• B is a d̃ × d̃ symmetric block-diagonal matrix with n blocks, where block i is the transition matrix for the random walk on H_{d_i} ∈ H, the expander in our family with d_i vertices.

• P = DQ^T is the n × d̃ matrix that maps any d̃-vector to an n-vector by summing all the entries corresponding to edge slots incident to the same vertex. This corresponds to projecting a distribution on edge slots back down to a distribution on vertices.

Note that T_b = P R_b Q for b ∈ {0, 1}. Likewise, we can write the transition matrix of the true product as
T = (1/2)·(T_1 T_0 + T_0 T_1) = (1/2)·(P R_1 J R_0 Q + P R_0 J R_1 Q),
where J is a d̃ × d̃ symmetric block-diagonal matrix with n blocks, where block i is J_i, the transition matrix for the complete graph on d_i vertices with a self-loop on every vertex; that is, every entry of J_i is 1/d_i. The second equality uses the identity J = QP.

We will show that
I_{d̃} − (1/2)·(R_1 B R_0 + R_0 B R_1) ≈_λ I_{d̃} − (1/2)·(R_1 J R_0 + R_0 J R_1).    (3.1)
From this the theorem follows by multiplying by D^{-1/2}P on the left and (D^{-1/2}P)^T = QD^{1/2} on the right and applying Proposition 2.2 Part 3. Since D^{-1/2}PQD^{1/2} = I_n, the left-hand side becomes the normalized Laplacian I − M̃ of the derandomized product and the right-hand side becomes I − (1/2)·(M_0 M_1 + M_1 M_0).

By Lemma 2.4, each graph in H is a λ-approximation of the complete graph on the same number of vertices. It follows that I_{d̃} − B ≈_λ I_{d̃} − J, because the quadratic form of a block-diagonal matrix equals the sum of the quadratic forms of its blocks. By Lemma 2.9 and the fact that I_{d̃} − B and I_{d̃} − J are symmetric and PSD, I_{d̃} − B is also a directed λ-approximation of I_{d̃} − J, so for all x, y ∈ R^{d̃},
|x^T(J − B)y| ≤ (λ/2)·(x^T(I_{d̃} − J)x + y^T(I_{d̃} − J)y) ≤ (λ/2)·(x^T x + y^T y − 2·x^T J y).
The first inequality uses Lemma 2.7. We can add the absolute values on the left-hand side since the right-hand side is always nonnegative (I_{d̃} − J is PSD) and invariant to swapping x with −x. The second inequality follows from the fact that J is PSD and so 2·x^T J y ≤ x^T J x + y^T J y.

Fix v ∈ R^{d̃} and set x = R_0 v and y = R_1 v. Recall that R_0 and R_1 are symmetric permutation matrices and hence R_0² = R_1² = I_{d̃}, so x^T x = y^T y = v^T v. Also note that for all square matrices A and vectors v, v^T A v = v^T((A + A^T)/2)v. Combining these observations with the above (applied to A = R_1(J − B)R_0, whose symmetrization is (1/2)·(R_1(J − B)R_0 + R_0(J − B)R_1)) gives
|v^T(1/2)·(R_1(J − B)R_0 + R_0(J − B)R_1)·v| = |x^T(J − B)y| ≤ λ·(v^T v − x^T J y) = λ·v^T(I_{d̃} − (1/2)·(R_1 J R_0 + R_0 J R_1))v.
Rearranging the above shows that
(1 − λ)·v^T L v ≤ v^T L̃ v ≤ (1 + λ)·v^T L v,
where L̃ and L denote the left- and right-hand sides of (3.1), respectively, which proves the theorem.
To apply the derandomized product, we need an expander family H with sizes equal to all of the vertex degrees. However, existing constructions of strongly explicit expander families only give graphs of sizes from restricted subsets of ℕ, such as all powers of 2 or all perfect squares. In [36, 30] this was handled by adding self-loops to make the vertex degrees all equal and matching the sizes of expanders in explicit families. Adding self-loops was acceptable in those articles because it does not affect connectivity (the focus of [36]) or the Laplacian (the focus of [30]). However, it does affect long random walks (our focus), so we cannot add self-loops. Instead, we show how to obtain strongly explicit expanders of all sizes. Our construction works by starting with a strongly explicit expander from one of the existing constructions and merging vertices to achieve any desired size.

Theorem 3.3. There exists a family of strongly explicit expanders H such that for all n > 1 and λ ∈ (0, 1) there is a c = poly(1/λ) and a c-regular graph H_{n,c} ∈ H on n vertices with λ(H_{n,c}) ≤ λ.
Proof. Let H' be a c'-regular expander on n̂ vertices, where n ≤ n̂ ≤ 2n, c' is a constant independent of n, and λ(H') ≤ λ' < 1/4. H' can be obtained from already known strongly explicit constructions such as [18, 35], followed by squaring the graph a constant number of times to achieve λ' < 1/4. We construct H as follows: pair off the first (n̂ − n) vertices of H' with the last (n̂ − n) vertices and merge each pair into a single vertex (which will then have degree 2·c'). To make the graph regular, add c' self-loops to each of the unpaired vertices. More precisely, writing c = 2·c', given u ∈ [n] and i ∈ [c], we compute Rot_H(u, i) as follows: if i ≤ c', take (u', i') = (u, i); if i > c' and u is a merged vertex, take (u', i') = (n + u, i − c'); and if i > c' and u is unpaired, the edge is a self-loop and we output (u, i). We then compute (v', j') = Rot_{H'}(u', i') and output the merged name of v' together with the corresponding label.

For the expansion analysis, since H' is a λ'-approximation of the complete graph on n̂ vertices (Lemma 2.4), applying the merging to the quadratic forms on both sides (Proposition 2.2 Part 3, conjugating by the 0/1 matrix that identifies merged pairs) shows that
I − M_H ≈_{λ'} I − M_U,
where the right-hand side is the normalized Laplacian of the graph U that results from starting with the complete graph on n̂ vertices, merging the same pairs of vertices that are merged in H, and adding n̂ self-loops to each of the unmerged vertices for regularity. We finish the proof by showing that λ(U) ≤ 1/2; then H is a (λ' + 1/2 + λ'/2)-approximation of the complete graph by Proposition 2.2 Part 2 and Lemma 2.4, and recalling that λ' < 1/4 completes the proof (a constant spectral bound below 1 can then be reduced to any desired λ, with c = poly(1/λ), by further squaring).
It remains to bound λ(U). Between every pair of vertices, U has enough parallel edges that we can write its transition matrix T_U as
T_U = (1/2)·J + (1/2)·E,
where J is the transition matrix of the complete graph on n vertices with a self-loop on every vertex and E is the transition matrix of an n̂-regular multigraph. Since E is the transition matrix of a regular graph, we have ‖E‖ ≤ 1, and thus for every vector v orthogonal to the uniform distribution, ‖T_U v‖ = (1/2)·‖E v‖ ≤ (1/2)·‖v‖. Hence λ(U) ≤ 1/2, which completes the proof.
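The merging step can be sketched as a rotation-map transformation. The code below is our own rendering of the construction (0-indexed; rot_Hp, n, hat_n, and cp are assumed inputs), pairing vertex u with vertex n + u for u < hat_n − n:

```python
def merged_rot(rot_Hp, n, hat_n, cp):
    """Rotation map of a 2*cp-regular graph on n vertices, obtained by merging
    vertices of a cp-regular expander H' on hat_n vertices (a sketch under our
    own index conventions, not the paper's exact pseudocode)."""
    def phi(v):                          # merged name of a vertex of H'
        return v - n if v >= n else v
    def rot(u, i):
        if i < cp:
            up, ip = u, i                # labels 0..cp-1: edges of u itself
        elif u < hat_n - n:
            up, ip = n + u, i - cp       # labels cp..2cp-1 at a merged vertex
        else:
            return u, i                  # self-loop at an unpaired vertex
        vp, jp = rot_Hp(up, ip)
        return phi(vp), (jp if vp < n else jp + cp)
    return rot
```

One can check that this map is an involution whenever rot_Hp is, so it defines a valid two-way labeled undirected multigraph.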

Main result
In this section we prove Theorem 1.4, our main result regarding space bounded computation of the normalized Laplacian of the r-step random walk on G.
Theorem 1.4 (restated). There is a deterministic algorithm that given an undirected multigraph G on n vertices with normalized Laplacian I − M, a nonnegative integer r, and ε > 0, constructs an undirected multigraph G̃ whose normalized Laplacian L̃ is an ε-spectral approximation of L = I − M^r. That is, for all v ∈ R^n,
(1 − ε)·v^T L v ≤ v^T L̃ v ≤ (1 + ε)·v^T L v.
The algorithm runs in space O(log N + (log r)·log(1/ε) + (log r)·log log r), where N is the bit length of the input graph G.
The algorithm described below is inspired by techniques used in [14] to approximate random walks with a randomized algorithm in nearly linear time. Our analyses use ideas from the work of Cohen, Kelner, Peebles, Peng, Rao, Sidford, and Vladu on directed Laplacian system solvers [16, 15], even though all of the graphs we work with are undirected.

Algorithm description and proof overview
Let I − M be the normalized Laplacian of our input and r the target power. We will first describe an algorithm for computing I − M^r without regard for space complexity and then convert it into a space-efficient approximation algorithm. The algorithm iteratively approximates larger and larger powers of M. On a given iteration, we will have computed I − M^k for some k < r, and we use the following two operations to increase k:

• Square: given I − M^k, compute I − M^{2k}.
• Plus one: given I − M^k, compute I − M^{k+1}.

Interleaving these two operations appropriately can produce any power r of M, invoking each operation at most log_2 r times; the code sketch below spells out the schedule. To see this, let b_z b_{z−1} ... b_0 be the bits of r in its binary representation, where b_0 is the least significant bit and b_z = 1 is the most significant. We are given I − M = I − M^{b_z}. The algorithm has z iterations, and each one appends one more bit, from most significant to least significant, to the binary representation of the current exponent. So after iteration i we will have I − M^{r_i}, where r_i is the integer with binary representation b_z b_{z−1} ... b_{z−i}. For iterations 1, ..., z, we read the bits of r from b_{z−1} down to b_0, one at a time. On each iteration we start with some power I − M^k. If the corresponding bit is a 0, we square to create I − M^{2k} (which appends a 0 to the binary representation of the current exponent) and proceed to the next iteration. If the corresponding bit is a 1, we square and then invoke a plus one operation to produce I − M^{2k+1} (which appends a 1). After iteration z we will have I − M^r.
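The following sketch (our own illustration) computes this schedule of operations from the bits of r and verifies that it reaches the target exponent:

```python
def exponent_schedule(r):
    """Operations that build M^r from M by reading r's bits, per the text above."""
    bits = bin(r)[2:]                 # b_z b_{z-1} ... b_0, with b_z = 1
    ops, k = [], 1                    # we hold I - M^k, starting from k = 1
    for b in bits[1:]:
        ops.append("square"); k *= 2          # appends a 0 to the exponent's bits
        if b == "1":
            ops.append("plus_one"); k += 1    # appends a 1 instead
    assert k == r
    return ops

print(exponent_schedule(11))   # ['square', 'square', 'plus_one', 'square', 'plus_one']
```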
Implemented recursively, this algorithm has log_2 r levels of recursion and uses O(log N) space at each level for the matrix multiplications, where N is the bit length of the input graph. This results in total space O(log r · log N), which is more than we want to use (cf. Theorem 1.4). We reduce the space complexity by replacing each square and plus one operation with the corresponding derandomized product, discussed in Section 3.
Theorem 3.2 says that the derandomized product produces spectral approximations to the square and the plus one operation. Since we apply these operations repeatedly on successive approximations, we need to know that they preserve spectral approximation, so that our ultimate output approximates a power of I − M. In other words, we need to show that given a graph with normalized Laplacian I − M̃ such that I − M̃ ≈_ε I − M^k, we have:

1. I − M̃² ≈_ε I − M^{2k} (the square preserves spectral approximation), and
2. I − (1/2)·(M̃ M + M M̃) ≈_ε I − M^{k+1} (the plus one operation preserves spectral approximation).

We prove these in Lemma 4.1 and Lemma 4.5, respectively. The transitive property of spectral approximation (Proposition 2.2 Part 2) will then complete the proof of spectral approximation.
We only know how to prove Items 1 and 2 when M^k is PSD. This is problematic because M is not guaranteed to be PSD for arbitrary graphs, and so M^k may only be PSD when k is even. Simple solutions like adding self-loops (to make the random walk lazy) are not available to us because loops may affect the random walk behavior in unpredictable ways. Another attempt would be to replace the plus one operation in the algorithm with a "plus two" operation:

• Plus two: given I − M^k, compute I − M^{k+2}.

Interleaving the square and plus two operations would preserve the positive semidefiniteness of the matrix we are approximating and can produce any even power of M. If r is odd, we could finish with one plus one operation, which will produce a spectral approximation because I − M^{r−1} is PSD. A problem with this approach is that the derandomized product is defined only for unweighted multigraphs, and M² may not correspond to an unweighted multigraph when G is irregular. (When G is regular, the graph G² consisting of paths of length 2 in G does have normalized Laplacian I − M².) For this reason we begin the algorithm by constructing an unweighted multigraph G_0 whose normalized Laplacian I − M_0 approximates I − M² and where M_0 is PSD. We can then approximate any power I − M_0^r using the square and plus one operations, and hence can approximate I − M^r for any even r (see Lemma 4.6). For odd powers, we again finish with a single plus one operation.
Our main algorithm is presented below (Algorithm 4.1). Its input is an undirected two-way-labeled multigraph G with normalized Laplacian I − M, ε ∈ (0, 1), and r with binary representation b_z b_{z−1} ... b_1 b_0.
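In pseudocode, the iteration reads as follows. This is a sketch under our reading of the description above (build_G0 and derand_product are hypothetical stand-ins for the constructions of Lemma 4.7 and Definition 3.1):

```python
def approximate_power(G, r, eps, H):
    """Sketch of Algorithm 4.1: return a graph whose normalized Laplacian
    approximates I - M^r (helpers are assumptions, not the paper's code)."""
    if r == 1:
        return G
    bits = bin(r)[2:]                       # b_z ... b_0, with b_z = 1
    G0 = build_G0(G, eps)                   # I - M_0 PSD, approximating I - M^2
    cur = G0                                # exponent of M_0 so far: r_0 = 1
    for b in bits[1:-1]:                    # b_{z-1} down to b_1
        cur = derand_product(cur, cur, H)   # derandomized square
        if b == "1":
            cur = derand_product(cur, G0, H)  # plus-one in the exponent of M_0
    if bits[-1] == "1":                     # b_0: one final plus-one with G itself
        cur = derand_product(cur, G, H)
    return cur
```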
Note that each derandomized product multiplies every vertex degree by a factor of 2·c. So the degrees of G, G_0, ..., G_z are all proportional to one another, and the derandomized products in Algorithm 4.1 are well-defined.

Proof of main result
In this section we prove Theorem 1.4 by showing that Algorithm 4.1 yields a spectral approximation of our target power I − M^r and can be implemented space-efficiently. First we show that our two operations, square and plus one, preserve spectral approximation.

Lemma 4.1 (Adapted from [14]). Let Ñ and N be symmetric matrices such that I − Ñ ≈_ε I − N and N is PSD. Then I − Ñ² ≈_ε I − N².
The proof of Lemma 4.1 is adapted from Cheng, Cheng, Liu, Peng, and Teng [14], which in turn uses ideas from the work of Miller and Peng [29]. We present the argument here for the sake of completeness. We begin with three claims.
Claim 4.2. Let Ñ and N be symmetric matrices such that I − Ñ ≈_ε I − N and I + Ñ ≈_ε I + N. Then I − Ñ² ≈_ε I − N².

Proof. We follow the proof of Lemma 4.4 in [14]. For all vectors x, y and symmetric matrices U,
x^T x + y^T y − 2·x^T U y = (1/2)·((x + y)^T(I − U)(x + y) + (x − y)^T(I + U)(x − y)),
and by Claim 4.3 below, minimizing the left-hand side over y yields x^T(I − U²)x. Since each of the two quadratic forms on the right is preserved up to a multiplicative (1 ± ε) factor when U is replaced by its approximation, the minimum over y is preserved up to (1 ± ε) as well. The claim follows by taking U = N and U = Ñ and our assumptions that I − Ñ ≈_ε I − N and I + Ñ ≈_ε I + N.

Claim 4.3. For every vector x and symmetric matrix U,
x^T(I − U²)x = min_y (x^T x + y^T y − 2·x^T U y).

Proof. This claim follows from Lemma B.2 in Miller and Peng [29], where it is stated in terms of the Schur complement. We give the argument here without appealing to the Schur complement. For all x, y,
2·x^T U y − y^T y ≤ 2·‖Ux‖·‖y‖ − ‖y‖² ≤ ‖Ux‖² = x^T U² x.
The first inequality follows from Cauchy–Schwarz and is tight if and only if y = c·Ux for some nonnegative scalar c. The second inequality follows from the fact that z² − 2·‖Ux‖·z is minimized at z = ‖Ux‖, so equality is achieved when y = Ux. Hence max_y (2·x^T U y − y^T y) = x^T U² x, and subtracting both sides from x^T x gives the claim.

Claim 4.4. Let Ñ and N be symmetric matrices such that I − Ñ ≈_ε I − N and N is PSD. Then I + Ñ ≈_ε I + N.

Proof. Similarly to the above, it suffices to compare quadratic forms. Since N is PSD, I − N ⪯ I + N, so for every v,
|v^T(Ñ − N)v| ≤ ε·v^T(I − N)v ≤ ε·v^T(I + N)v,
which is the claim.

Lemma 4.1 now follows by combining Claim 4.4 with Claim 4.2. Next we show that the plus one operation in our algorithm also preserves spectral approximation.
Lemma 4.5. Let Ñ, N_1, and N_2 be symmetric matrices with spectral norm at most 1, and suppose that N_1 is PSD and commutes with N_2. If I − Ñ ≈_ε I − N_1, then
I − (1/2)·(Ñ N_2 + N_2 Ñ) ≈_ε I − N_2 N_1.

Even though I − (1/2)·(Ñ N_2 + N_2 Ñ) and I − N_2 N_1 are both symmetric matrices (the latter since N_1 and N_2 commute), the proof of Lemma 4.5 uses machinery from the theory of spectral approximation for asymmetric matrices. This is because the proof first shows that I − N_2 Ñ is a directed ε-approximation of I − N_2 N_1 in the asymmetric sense (Definition 2.6). The asymmetric definition is necessary for this intermediate claim because I − N_2 Ñ can be an asymmetric matrix, so the usual notion of spectral approximation does not apply. We are not aware of any proof of Lemma 4.5 that circumvents this application of asymmetric definitions.
Proof. Since I − Ñ ≈_ε I − N_1 and I − N_1 is PSD (since ‖N_1‖ ≤ 1), Lemma 2.7 and Lemma 2.9 say that for all x, y ∈ R^n,
|x^T(Ñ − N_1)y| ≤ (ε/2)·(x^T(I − N_1)x + y^T(I − N_1)y).
It follows (replacing x with N_2 x) that for all vectors x, y ∈ R^n,
|x^T N_2(Ñ − N_1)y| ≤ (ε/2)·(x^T N_2(I − N_1)N_2 x + y^T(I − N_1)y).
We now claim that N_2(I − N_1)N_2 ⪯ I − N_1 ⪯ I − N_2 N_1. If the claim is true, then the above implies that for all vectors x, y ∈ R^n,
|x^T((I − N_2 Ñ) − (I − N_2 N_1))y| ≤ (ε/2)·(x^T(I − N_2 N_1)x + y^T(I − N_2 N_1)y),
which by Lemma 2.7 implies that I − N_2 Ñ is a directed ε-approximation of I − N_2 N_1. To verify the claim, note that since N_1 and N_2 are commuting symmetric matrices, they are simultaneously diagonalizable, so it suffices to check the inequalities on corresponding eigenvalues ν_1 ∈ [0, 1] of N_1 and ν_2 ∈ [−1, 1] of N_2: indeed ν_2²·(1 − ν_1) ≤ 1 − ν_1 and 1 − ν_1 ≤ 1 − ν_2·ν_1.

Since for all square matrices W it is the case that ‖(W + W^T)/2‖ ≤ ‖W‖/2 + ‖W^T‖/2 = ‖W‖, we have ‖(1/2)·(Ñ N_2 + N_2 Ñ)‖ ≤ ‖N_2 Ñ‖ ≤ 1. By Lemma 2.8, the symmetrization I − (1/2)·(Ñ N_2 + N_2 Ñ) = (1/2)·((I − N_2 Ñ) + (I − N_2 Ñ)^T) is a directed ε-approximation of I − N_2 N_1, and since these matrices are symmetric (N_2 and N_1 commute), Lemma 2.9 says that the approximation also holds in the undirected sense.
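Lemma 4.5 can be sanity-checked numerically. The sketch below is entirely our own toy setup: it builds commuting N_1 = M² and N_2 = M from a random graph, perturbs N_1 into a symmetric Ñ with the same top eigenvector, and confirms that the plus-one operation does not increase the tightest approximation parameter:

```python
import numpy as np
rng = np.random.default_rng(0)

def tightest_eps(L, Lt, tol=1e-8):
    """Smallest eps with Lt ~_eps L, computed on the complement of ker(L)."""
    w, V = np.linalg.eigh(L)
    keep = w > tol
    R = V[:, keep] / np.sqrt(w[keep])
    return np.abs(np.linalg.eigvalsh(R.T @ Lt @ R) - 1).max()

# Normalized adjacency matrix M of a random connected non-bipartite multigraph.
A = rng.integers(1, 4, size=(6, 6)).astype(float); A = A + A.T
d = A.sum(axis=0)
M = A / np.sqrt(np.outer(d, d))
N1, N2 = M @ M, M                  # N1 is PSD and commutes with N2
q = np.sqrt(d)                     # M q = q, so q spans the kernels below

# Symmetric perturbation orthogonal to q, so Ntilde keeps the same kernel.
E = rng.standard_normal((6, 6)); E = (E + E.T) / 2
P = np.eye(6) - np.outer(q, q) / (q @ q)
E = P @ E @ P
Nt = N1 + 0.01 * E / np.linalg.norm(E, 2)

eps_hyp = tightest_eps(np.eye(6) - N1, np.eye(6) - Nt)
eps_con = tightest_eps(np.eye(6) - N2 @ N1, np.eye(6) - (Nt @ N2 + N2 @ Nt) / 2)
assert eps_con <= eps_hyp + 1e-9   # Lemma 4.5: plus-one preserves the parameter
```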
Setting N_1 = M^k and N_2 = M in Lemma 4.5 shows that the plus one operation preserves spectral approximation whenever M^k is PSD. Recall that the first step in Algorithm 4.1 is to construct a graph G_0 with normalized Laplacian I − M_0 such that M_0 is PSD and I − M_0 approximates I − M². We can then approximate I − M_0^k for any k using squaring and plus one, because M_0^k is always PSD. The following lemma says that I − M_0^k spectrally approximates I − M^{2k}.
Lemma 4.6. Let A and B be symmetric PSD matrices with spectral norm at most 1 such that I − A ≈_ε I − B and I − B ≈_ε I − A, where ε ≤ 1/(2·ℓ(r)) and ℓ(r) denotes the number of bits in the binary representation of r. Then I − A^r ≈_{2·ℓ(r)·ε} I − B^r.

Proof. Write the binary representation of r as b_z b_{z−1} ... b_0 with b_z = 1 and z = ℓ(r) − 1, and for 0 ≤ i ≤ z let r_i be the integer with binary representation b_z ... b_{z−i}. We prove by induction on i that I − A^{r_i} ≈_{(2·i+1)·ε} I − B^{r_i}. The base case i = 0 is the hypothesis, since r_0 = 1. For the inductive step, suppose I − A^{r_{i−1}} ≈_{(2·i−1)·ε} I − B^{r_{i−1}}. Since B^{r_{i−1}} is PSD, Lemma 4.1 gives I − A^{2·r_{i−1}} ≈_{(2·i−1)·ε} I − B^{2·r_{i−1}}.

If b_{z−i} = 0, then 2·r_{i−1} = r_i, so I − A^{r_i} ≈_{(2·i−1)·ε} I − B^{r_i} and (using Proposition 2.2 Part 4) the inductive step is complete. If b_{z−i} = 1, then r_i = 2·r_{i−1} + 1, and by applying Lemma 4.5 (with Ñ = A^{2·r_{i−1}}, N_1 = B^{2·r_{i−1}}, and N_2 = B) we have
I − (1/2)·(A^{2·r_{i−1}}·B + B·A^{2·r_{i−1}}) ≈_{(2·i−1)·ε} I − B^{r_i}.
Applying Lemma 4.5 again (recalling that I − B ≈_ε I − A and A is PSD; here Ñ = B, N_1 = A, and N_2 = A^{2·r_{i−1}}) gives
I − (1/2)·(B·A^{2·r_{i−1}} + A^{2·r_{i−1}}·B) ≈_ε I − A^{r_i}.

By Proposition 2.2 Part 1, this implies
I − A^{r_i} ≈_{ε/(1−ε)} I − (1/2)·(A^{2·r_{i−1}}·B + B·A^{2·r_{i−1}}).

Putting this together with Proposition 2.2 Part 2, we have
I − A^{r_i} ≈_δ I − B^{r_i}, where δ = ε/(1−ε) + (2·i−1)·ε + (ε/(1−ε))·(2·i−1)·ε ≤ (2·i+1)·ε,
and the inequality follows from our assumption that ε ≤ 1/(2·ℓ(r)). So I − A^{r_i} ≈_{(2·i+1)·ε} I − B^{r_i}, completing the inductive step. Taking i = z gives I − A^r ≈_{(2·z+1)·ε} I − B^r, and since 2·z + 1 ≤ 2·ℓ(r), we conclude I − A^r ≈_{2·ℓ(r)·ε} I − B^r, as desired.
Now we can prove Theorem 1.4. We prove the theorem with three lemmas: Lemma 4.7 shows how to construct the graph G 0 needed in Algorithm 4.1, Lemma 4.8 argues that the algorithm produces a spectral approximation to I − M r , and Lemma 4.9 shows that the algorithm can be implemented in space O(log N + (log r) · log(1/ε) + (log r) · log log r).

Building G_0
Lemma 4.7. There is an algorithm that takes an undirected, unweighted multigraph G with normalized Laplacian I − M and a parameter ε > 0, and outputs a rotation map Rot_{G_0} for an undirected, unweighted multigraph G_0 with a two-way labeling and normalized Laplacian I − M_0 such that:
1. M_0 is PSD,
2. I − M_0 ≈_ε I − M², and
3. the algorithm uses space O(log N + log(1/ε)), where N is the bit length of G.

Proof. Let δ = 1/⌈4/ε⌉ and t = 1/δ, an integer. Let H be a family of c-regular expanders of every size from Theorem 3.3, such that for every H ∈ H, λ(H) ≤ δ (and hence c = poly(1/δ)).
We construct G_0 as follows: form the derandomized square G ⊙_H G, duplicate every edge to have multiplicity t, and then, for each vertex v, add 2·c·d_v self-loops. So each vertex v in G_0 has degree (t + 1)·2·c·d_v, and hence G_0 has the same stationary distribution as G. Writing I − M̃ for the normalized Laplacian of G ⊙_H G, we can write M_0 = (t·M̃ + I)/(t + 1).
First we show that M_0 is PSD. From Theorem 3.2, we have I − M̃ ≈_δ I − M². Since M² is PSD, I − M² ⪯ I, so I − M̃ ⪯ (1 + δ)·I, and hence the eigenvalues of M̃ are at least −δ = −1/t. It follows that every eigenvalue of M_0 = (t·M̃ + I)/(t + 1) is nonnegative, i.e., M_0 is PSD. Observe also that I − M_0 = (t/(t + 1))·(I − M̃), so since I − M̃ ≈_δ I − M², we also have I − M_0 ≈_ε I − M² (indeed, (1 − δ)·t/(t + 1) ≥ 1 − 2·δ ≥ 1 − ε and (1 + δ)·t/(t + 1) ≤ 1 + ε).

We can construct a two-way labeling of G in space O(log N) by numbering the edges incident to each vertex arbitrarily. Computing Rot_{G ⊙_H G} involves computing Rot_G twice and the rotation map of an expander in H once. For a given vertex degree d in G, Rot_{H_d} can be computed in space O(log(d·c)) = O(log N + log(1/ε)). Duplicating the edges and adding self-loops for Rot_{G_0} adds at most O(log N + log(1/ε)) overhead, for a total of O(log N + log(1/ε)) space.

Lemma 4.8. When given an undirected multigraph G with normalized Laplacian I − M, a positive integer r, and ε ∈ (0, 1), Algorithm 4.1 outputs a multigraph G_z whose normalized Laplacian I − M_z satisfies I − M_z ≈_ε I − M^r.

Proof. Let b_z b_{z−1} ... b_1 b_0 be the binary representation of r. Recall that for the derandomized products in our algorithm we use a family of c-regular expanders H from Theorem 3.3 such that for every H ∈ H, λ(H) ≤ µ := ε/(32·z) (and hence c = poly(1/µ) = poly((log r)/ε)).

Proof of spectral approximation
We construct G_0 with normalized Laplacian I − M_0 as in Lemma 4.7, so that M_0 is PSD and I − M_0 ≈_{ε/(16·z)} I − M². By Proposition 2.2 Part 1, and the fact that ε/(16·z) ≤ 1/2, we also have I − M² ≈_{ε/(8·z)} I − M_0.

For each i ∈ {0, ..., z}, let r_i be the integer with binary representation b_z b_{z−1} ... b_{z−i}, and let I − M_i be the normalized Laplacian of the graph G_i produced by Algorithm 4.1. We will prove by induction on i ≤ z − 1 that I − M_i is a (4·µ·i)-approximation of I − M_0^{r_i}. In particular, I − M_{z−1} is then a 4·µ·(z − 1) ≤ ε/8-approximation of I − M_0^{r_{z−1}}. The base case is trivial since r_0 = 1. For the induction step, suppose that I − M_{i−1} ≈_{4·µ·(i−1)} I − M_0^{r_{i−1}}.

Let I − M_ds be the normalized Laplacian of G_{i−1} ⊙_H G_{i−1}. By Theorem 3.2, I − M_ds ≈_µ I − M_{i−1}², and by Lemma 4.1 (using that M_0^{r_{i−1}} is PSD), I − M_{i−1}² ≈_{4·µ·(i−1)} I − M_0^{2·r_{i−1}}. By Proposition 2.2 Part 2, I − M_ds is a (µ + 4·µ·(i − 1) + 4·µ²·(i − 1))-approximation of I − M_0^{2·r_{i−1}}. If b_{z−i} = 0, then r_i = 2·r_{i−1} and G_i = G_{i−1} ⊙_H G_{i−1}; since µ + 4·µ·(i − 1) + 4·µ²·(i − 1) ≤ 4·µ·i (using µ ≤ 1/(32·(i − 1))), the inductive step is complete. If b_{z−i} = 1, then G_i = (G_{i−1} ⊙_H G_{i−1}) ⊙_H G_0, and by Theorem 3.2 and Lemma 4.5 (with Ñ = M_ds, N_1 = M_0^{2·r_{i−1}}, and N_2 = M_0) we have
I − M_i ≈_µ I − (1/2)·(M_ds·M_0 + M_0·M_ds) and I − (1/2)·(M_ds·M_0 + M_0·M_ds) ≈_{µ+4·µ·(i−1)+4·µ²·(i−1)} I − M_0^{r_i}.
Applying Proposition 2.2 Part 2 and noting that µ ≤ 1/(32·(i − 1)), we get I − M_i ≈_{4·µ·i} I − M_0^{r_i}, completing the induction.

Applying Lemma 4.6 (with A = M_0, B = M², and approximation parameter ε/(8·z) ≤ 1/(2·z)) gives I − M_0^{r_{z−1}} ≈_{ε/4} I − M^{2·r_{z−1}}. By Proposition 2.2 Part 2, and the fact that ε ≤ 1, this gives I − M_{z−1} ≈_{ε/2} I − M^{2·r_{z−1}}.

If b_0 = 0, then 2·r_{z−1} = r, G_z = G_{z−1}, and we are done. If b_0 = 1, then we apply one more plus one operation using our original graph G to form G_z = G_{z−1} ⊙_H G. By Theorem 3.2 and Lemma 4.5 (with Ñ = M_{z−1}, N_1 = M^{2·r_{z−1}}, which is PSD, and N_2 = M),
I − M_z ≈_µ I − (1/2)·(M_{z−1}·M + M·M_{z−1}) and I − (1/2)·(M_{z−1}·M + M·M_{z−1}) ≈_{ε/2} I − M^r.
Applying Proposition 2.2 Part 2 then gives I − M_z ≈_ε I − M^r.

Analysis of space complexity
Lemma 4.9. Algorithm 4.1 can be implemented so that, given an undirected multigraph G, a positive integer r, and ε ∈ (0, 1), it computes its output G_z in space O(log N + (log r)·log(1/ε) + (log r)·log log r), where N is the bit length of the input graph G.
Proof. We show how to compute Rot_{G_z} in space O(log N + (log r)·log(1/ε) + (log r)·log log r). We copy the bits of r into memory and expand them into τ bits as follows: for i ∈ {1, ..., z − 1}, if b_{z−i} = 0 we record a 0 (corresponding to a derandomized square), and if b_{z−i} = 1 we record a 0 followed by a 1 (corresponding to a derandomized square followed by a plus one operation). We finish by recording b_0 at the end. Now we have τ bits t_1, ..., t_τ in memory, where for i < τ, t_i = 0 if the i-th derandomized product in our algorithm is a derandomized square and t_i = 1 if the i-th derandomized product is a plus one operation with the graph G_0. If t_τ = 0, we do no derandomized product on the last iteration, and if t_τ = 1 we apply the plus one operation using G instead of G_0, as described in the algorithm. A small sketch of this expansion appears below.
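The bit expansion just described can be written out directly (our own illustration of the scheme above):

```python
def expand_bits(r):
    """The bits t_1, ..., t_tau: 0 = derandomized square, 1 = plus-one product."""
    bits = bin(r)[2:]            # b_z ... b_0, with b_z = 1 (assumes r >= 2)
    t = []
    for b in bits[1:-1]:         # b_{z-1} down to b_1
        t.append(0)              # a derandomized square ...
        if b == "1":
            t.append(1)          # ... followed by a plus-one with G_0
    t.append(int(bits[-1]))      # t_tau = b_0: a final plus-one with G itself (if 1)
    return t

print(expand_bits(11))           # [0, 0, 1, 1]
```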
We also re-number our graphs as G_1, ..., G_τ, where G_i is the graph produced by applying the derandomized products corresponding to t_1, ..., t_i. For each i ∈ [τ] and v ∈ [n], vertex v in graph G_i has degree (2·c)^i·d_v, because each derandomized product multiplies every vertex degree by a factor of 2·c.
Since our graphs can be irregular, the input to a rotation map may have a different length than its output. To simplify the space complexity analysis, when calling a rotation map we pad the edge labels so that the inputs and outputs of the rotation map always have the same length. For each graph G_i, we pad its edge labels to have length ℓ_i = ⌈log_2 d_max⌉ + i·⌈log_2(2·c)⌉, where d_max is the maximum vertex degree of G.
Sublogarithmic-space complexity can depend on the model, so we will be explicit about the model we use. We compute the rotation map of each graph G_i on a multi-tape Turing machine with the following input/output conventions:

• Input Description:
- Tape 1 (read-only): Contains the input G, r, and ε, with the head at the leftmost position of the tape.
- Tape 2 (read-write): Contains the input to the rotation map, (v_0, k_0), where v_0 is a vertex of G_i and k_0 is the label of an edge incident to v_0, padded to have total length ℓ_i. The tape head is at the rightmost end of k_0. The rest of the tape may contain additional data.
- Tape 3 (read-write): Contains the bits t_1, ..., t_τ with the head pointing at t_i.
• Output Description:
- Tape 1: The head should be returned to the leftmost position.
- Tape 2: Contains the output of the rotation map, (v_2, k_2) = Rot_{G_i}(v_0, k_0), where k_2 is padded to have total length ℓ_i. The head should be at the rightmost position of k_2, and the rest of the tape should remain unchanged from its state at the beginning of the computation.
- Tape 3: Contains the bits t_1, ..., t_τ with the head pointing at t_i.
- Tapes 4+ (read-write): Are returned to the blank state with the heads at the leftmost position.
Let Space(G_i) be the space used on tapes other than Tape 1 to compute Rot_{G_i}. We will show that Space(G_i) = Space(G_{i−1}) + O(log c). Recalling that Space(G_0) = O(log N + log(1/ε) + log log r) and unraveling the recursion gives
Space(G_τ) = O(log N + log(1/ε) + log log r + τ·log c)
= O(log N + log(1/ε) + log log r + log r·log(poly(log r)/ε))
= O(log N + (log r)·log(1/ε) + (log r)·log log r),
as desired. Below we prove the recurrence on Space(G_i) by low-level reasoning about rotation maps. While similar space complexity analyses have appeared before (see [34, 36, 30]), we first provide some intuition for how the space-efficient implementation of our algorithm works. Our ultimate goal is, given a vertex v and edge label e in G_τ, to compute the e-th neighbor of v in G_τ. If we can achieve this, we can compute the entire graph G_τ by enumerating over vertices and edges. From the way we build G_τ out of the derandomized product, neighbor relations in G_τ are defined by neighbor relations in G_{τ−1} together with neighbor relations in an expander graph (in accordance with Definition 3.1). Likewise, neighbor relations in G_{τ−1} are defined by neighbor relations in G_{τ−2} and in an expander graph, and so on. So we can recursively compute neighbors in graphs with larger subscripts by reducing those computations to neighbors in graphs with smaller subscripts, along with computations on expander graphs.
A naive implementation of this recursion would use O(log N) space for each of the O(log r) levels of recursion, for a total of O(log N · log r) space. The key advantage of using rotation maps and the structure inherent in the derandomized product is that the space used to compute neighbors in the graphs G_i can be reused. If we can compute neighbors in G_{i−1}, the only additional space needed to compute neighbors in G_i is the space to store an edge label from a graph in our expander family. This reasoning leads to the recurrence Space(G_i) = Space(G_{i−1}) + O(log c).

To see how this operates at a low level with rotation maps, we begin with (v_0, k_0) on Tape 2 (possibly with additional data) and the tape head at the far right of k_0. We parse k_0 as (j_0, a_0, b), where j_0 is a (padded) edge label in G_{i−1}, a_0 is an edge label in an expander from H, and b is a bit. Note that G_i = G_{i−1} ⊙_H G', where for i < τ we have G' = G_{i−1} if t_i = 0 and G' = G_0 if t_i = 1, while for i = τ with t_τ = 1 we have G' = G. We compute Rot_{G_i} according to Definition 3.1. We move the head left to the rightmost position of j_0. If b = 0, we move the third tape head to t_{i−1} and recursively compute Rot_{G_{i−1}}(v_0, j_0), so that Tape 2 now contains (v_1, j_1, a_0, b) (with j_1 padded to have the same length as j_0). The vertex v_1 in the graph G_{i−1} has degree d = (2·c)^{i−1}·d_{v_1}, so we next compute Rot_{H_d}(j_1, a_0), after which (v_1, j_2, a_1, b) is on the tape. Finally we compute Rot_{G'}(v_1, j_2) and flip b to finish with (v_2, j_3, a_1, b̄) on the second tape. We then move the third tape head back to t_i. If b = 1, we proceed in the same way with the roles of G_{i−1} and G' swapped.
So computing Rot_{G_i} involves computing the rotation maps of G_{i−1}, H_d, and G', each once. Note that these rotation map evaluations occur in succession and can therefore reuse the same space. Clearly Space(G') ≤ Space(G_{i−1}), because either G' = G_{i−1}, or G' is one of G_0 and G, both of whose rotation maps are subroutines in computing Rot_{G_{i−1}}. Computing Rot_{H_d} adds an overhead of at most O(log c) space to store the additional edge label a_0 and the bit b. So we can compute the rotation map of G_τ in space O(log N + (log r)·log(1/ε) + (log r)·log log r).

Random walks
Our algorithm immediately implies Theorem 1.2, which we prove below.
Theorem 1.2 (restated). There is a deterministic algorithm that given an undirected multigraph G on n vertices, a positive integer r, a set of vertices S, and ε > 0, computes a number Φ̃ such that
(1 − ε)·Φ(S) ≤ Φ̃ ≤ (1 + ε)·Φ(S),
and runs in space O(log N + (log r)·log(1/ε) + (log r)·log log r), where N is the bit length of the input graph G (the length of the description of G as a list of edges, in a binary encoding).
Proof. Let e_S ∈ R^n be the characteristic vector of S, let d_S = ∑_{v∈S} deg(v), and set v = D^{1/2}e_S. Then
v^T(I − M^r)v / d_S = 1 − e_S^T D^{1/2} M^r D^{1/2} e_S / d_S = 1 − e_S^T T^r (D e_S / d_S) = 1 − Pr[V_r ∈ S | V_0 ∈ S] = Φ(S),
where the penultimate equality follows from the fact that D e_S / d_S is the probability distribution over vertices in S in which each vertex has mass proportional to its degree, i.e., the distribution of V_0 conditioned on V_0 ∈ S. Multiplying this distribution by T^r gives the distribution of V_r conditioned on V_0 ∈ S, and multiplying the resulting distribution on the left by e_S^T sums the probabilities over vertices in S, which gives the probability that the random walk ends in S.
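The identity above is easy to check numerically. The following sketch (reusing our earlier triangle-with-pendant toy graph) compares the quadratic form with a direct walk computation:

```python
import numpy as np

A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
deg = A.sum(axis=0)
T = A / deg                                  # column-stochastic: T = A D^{-1}
M = A / np.sqrt(np.outer(deg, deg))          # M = D^{-1/2} A D^{-1/2}
r, S = 3, [0, 3]
e_S = np.zeros(4); e_S[S] = 1
d_S = deg[S].sum()
v = np.sqrt(deg) * e_S                       # v = D^{1/2} e_S
phi_quadratic = v @ (np.eye(4) - np.linalg.matrix_power(M, r)) @ v / d_S
start = deg * e_S / d_S                      # distribution of V_0 given V_0 in S
phi_walk = 1 - e_S @ np.linalg.matrix_power(T, r) @ start
assert np.isclose(phi_quadratic, phi_walk)   # both equal Phi(S)
```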
From Theorem 1.4, we can compute a graph whose normalized Laplacian L̃ satisfies L̃ ≈_ε I − M^r in space O(log N + (log r)·log(1/ε) + (log r)·log log r). It follows from Proposition 2.2 Part 6 and the definition of spectral approximation that the number Φ̃ = v^T L̃ v / d_S satisfies (1 − ε)·Φ(S) ≤ Φ̃ ≤ (1 + ε)·Φ(S), as desired.

Our algorithm also implies an algorithm for approximating random walk matrix polynomials.
Definition 5.1 (Random walk matrix polynomials). Let G be an undirected multigraph with Laplacian D − A, and let α = (α_1, ..., α_ℓ) be a vector of nonnegative scalars such that ∑_{r∈[ℓ]} α_r = 1. The normalized ℓ-degree random walk matrix polynomial of G with respect to α is defined to be
L_α(G) = I − ∑_{r∈[ℓ]} α_r·M^r,
where M = D^{-1/2}AD^{-1/2}. Random walk matrix polynomials are Laplacian matrices that arise in algorithms for approximating q-th roots of symmetric diagonally dominant matrices and in efficient sampling from Gaussian graphical models [27, 14, 13].
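A direct (dense) computation of this polynomial is a one-liner; the following sketch (our own) spells it out:

```python
import numpy as np

def normalized_rw_polynomial(M, alpha):
    """L_alpha(G) = I - sum_{r=1}^{len(alpha)} alpha[r-1] * M^r."""
    out = np.eye(len(M))
    P = np.eye(len(M))
    for a in alpha:              # alpha: nonnegative weights summing to 1
        P = P @ M                # P = M^r on the r-th pass
        out -= a * P
    return out
```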
Corollary 5.2. There is a deterministic algorithm that, given an undirected multigraph G, a vector α = (α_1, ..., α_ℓ) of nonnegative scalars summing to 1, and ε > 0, computes an ε-spectral approximation of L_α(G) in space O(log N + (log ℓ)·log(1/ε) + (log ℓ)·log log ℓ), where N is the bit length of the input.

Proof. For each r ∈ [ℓ], let L̃_r be an ε-approximation of I − M^r. Applying Proposition 2.2 Parts 5 and 6 gives
∑_{r∈[ℓ]} α_r·L̃_r ≈_ε ∑_{r∈[ℓ]} α_r·(I − M^r) = L_α(G).
Our algorithm can compute each L̃_r in space O(log N + (log r)·log(1/ε) + (log r)·log log r). Multiplication and iterated addition can both be computed in space logarithmic in the input bit length [10, 21]. So, by the composition of space-bounded algorithms (Proposition 2.10), L_α(G) can be approximated in space O(log N + (log ℓ)·log(1/ε) + (log ℓ)·log log ℓ).

Odd-length walks in nearly linear time
Our approach to approximating odd-length walks deterministically and space-efficiently also leads to a new result in the context of nearly linear-time (randomized) spectral sparsification algorithms. Specifically, we extend the following Theorem of Cheng, Cheng, Liu, Peng, and Teng [14].

Theorem 5.3 ([14]). There is a randomized algorithm that, given an undirected weighted graph G with n vertices, m edges, and normalized Laplacian I − M, an even integer r, and ε > 0, constructs an undirected weighted graph G̃ with normalized Laplacian L̃ containing O(n·log n/ε²) non-zero entries, in time O(m·log³ n·log⁵ r/ε⁴), such that L̃ ≈_ε I − M^r with high probability.
Our approach to approximating odd-length walks can be used to extend Theorem 5.3 to odd r.
Corollary 5.4. There is a randomized algorithm that, given an undirected weighted graph G with n vertices, m edges, and normalized Laplacian I − M, an odd integer r, and ε > 0, constructs an undirected weighted graph G̃ with normalized Laplacian L̃ containing O(n·log n/ε²) non-zero entries, in time O(m·log³ n·log⁵ r/ε⁴), such that L̃ ≈_ε I − M^r with high probability.
Our proof of Corollary 5.4 uses Theorem 5.3 as a black box. So, in fact, given G with normalized Laplacian I − M and any graph G̃ whose normalized Laplacian approximates I − M^r for even r, we can produce an approximation to I − M^{r+1} in time nearly linear in the sparsities of G and G̃. To prove the corollary, we use the same method used in [33] and [15] for sparsifying two-step walks on undirected and directed graphs, respectively. The idea is that the graphs constructed from two-step walks can be decomposed into unions of product graphs: graphs whose adjacency matrices have the form x y^T for vectors x, y ∈ R^n. We use the following fact from [15], which says that product graphs can be sparsified in time that is nearly linear in the number of non-zero entries of x and y, rather than the number of non-zero entries in x y^T, which may be much larger.
Lemma 5.5 (Adapted from [15], Lemma 3.18). Let x, y be non-negative vectors with ‖x‖_1 = ‖y‖_1 = r, and let ε ∈ (0, 1). Furthermore, let s denote the total number of non-zero entries in x and y, and let L = diag(y) − (1/r)·x y^T. Then there is an algorithm that in time O(s·log s/ε²) computes a matrix L̃ with O(s·log s/ε²) non-zero entries such that L̃ is a directed ε-approximation of L, with high probability.
After using Lemma 5.5 to sparsify each product graph in our decomposition, we then apply an additional round of graph sparsification.

Lemma 5.6 ([25]). Given an undirected graph G with n vertices, m edges, and Laplacian L, and given ε > 0, there is an algorithm that computes a graph G̃ with Laplacian L̃ containing O(n·log n/ε²) non-zero entries in time O(m·log² n/ε²), such that L̃ ≈_ε L with high probability.

Proof of Corollary 5.4. Since r − 1 is even, we can use Theorem 5.3 to construct a graph G̃ with normalized Laplacian I − M̃ ≈_{ε/8} I − M^{r−1}. Since M^{r−1} is PSD and commutes with M, Lemma 4.5 gives
I − (1/2)·(M̃ M + M M̃) ≈_{ε/8} I − M^r.    (5.1)
Our goal is to sparsify the left-hand side. Note that since I − M̃ spectrally approximates I − M^{r−1}, the corresponding graphs must have the same stationary distribution and hence proportional vertex degrees. In other words, there is a number k such that for all vertices v ∈ [n] we have deg_{G̃}(v) = k·deg_G(v). We will think of the graph that adds one step to our walk as k·G rather than G, because k·G and G̃ have the same degrees and the normalized Laplacian of k·G is the same as the normalized Laplacian of G.
Let A and Ã be the adjacency matrices of k·G and G̃, respectively, and let D be their common diagonal matrix of vertex degrees. Let Q = D − A D^{-1} Ã, and note that Q is the Laplacian of a weighted directed graph. We will show how to compute a sparse directed approximation of Q and use it to compute a sparse approximation to the left-hand side of Equation (5.1). Our approach is inspired by similar arguments from [33, 15]. We decompose Q into n product graphs as follows. For each i ∈ [n], let
Q_i = diag(Ã_{i,:}) − (1/D_{i,i})·A_{:,i} Ã_{i,:},
where Ã_{i,:} and A_{:,i} denote the i-th row of Ã and the i-th column of A, respectively. Observe that Q_i is a directed Laplacian of a bipartite graph between the neighbors of vertex i in k·G and the neighbors of i in G̃, and that Q = ∑_{i∈[n]} Q_i. Furthermore, each Q_i is a product graph and hence can be sparsified using Lemma 5.5. Set x_i = A_{:,i}, y_i = Ã_{i,:}, r_i = D_{i,i}, and let s_i be the total number of non-zero entries in x_i and y_i.
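The decomposition Q = ∑_{i∈[n]} Q_i can be verified numerically; the following sketch (our own toy case, taking Ã = A so the degrees trivially match) checks it:

```python
import numpy as np
rng = np.random.default_rng(1)

# Verify Q = sum_i Q_i with Q_i = diag(At[i, :]) - A[:, i] At[i, :] / D_ii.
A = rng.integers(1, 4, size=(5, 5)).astype(float); A = A + A.T
At = A.copy()                      # stands in for the adjacency matrix of G~
d = A.sum(axis=0)                  # common vertex degrees of k*G and G~
Q = np.diag(d) - A @ np.diag(1 / d) @ At
Q_sum = sum(np.diag(At[i, :]) - np.outer(A[:, i], At[i, :]) / d[i] for i in range(5))
assert np.allclose(Q, Q_sum)
```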
Note that ‖x_i‖_1 = ‖y_i‖_1 = r_i because k·G and G̃ have the same vertex degrees. By Lemma 5.5, for each i ∈ [n] we can compute a directed ε/8-approximation Q̃_i of Q_i containing O(s_i·log s_i/ε²) entries in time O(s_i·log s_i/ε²). Applying the lemma to each Q_i yields Q̃ = ∑_{i∈[n]} Q̃_i, which contains O(m·log m/ε²) non-zero entries and can be computed in time O(m·log m/ε²), because ∑_{i∈[n]} s_i = O(m). By Lemma 2.8, we have (1/2)·(Q̃_i + Q̃_i^T) ≈_{ε/8} (1/2)·(Q_i + Q_i^T) for all i ∈ [n] with high probability. It follows from Proposition 2.2 Part 5 that
(1/2)·(Q̃ + Q̃^T) ≈_{ε/8} (1/2)·(Q + Q^T)
with high probability. From Proposition 2.2 Part 3, we then get
D^{-1/2}·(1/2)·(Q̃ + Q̃^T)·D^{-1/2} ≈_{ε/8} D^{-1/2}·(1/2)·(Q + Q^T)·D^{-1/2} = I − (1/2)·(M̃ M + M M̃)
with high probability. Applying Lemma 5.6, we can re-sparsify the graph corresponding to D^{-1/2}·(1/2)·(Q̃ + Q̃^T)·D^{-1/2} to produce a graph Ĝ whose normalized Laplacian I − M̂ has O(n·log n/ε²) non-zero entries and satisfies I − M̂ ≈_{ε/8} D^{-1/2}·(1/2)·(Q̃ + Q̃^T)·D^{-1/2} with high probability. This takes additional time O(m·log² n/ε²) due to Theorem 1.1 of [25]. Applying Proposition 2.2 Part 2 twice, we get that I − M̂ ≈_ε I − M^r, and the total running time of the procedure is O(m·log³ n·log⁵ r/ε⁴).
