Adapt or Die: Polynomial Lower Bounds for Non-Adaptive Dynamic Data Structures

In this paper, we study the role non-adaptivity plays in maintaining dynamic data structures. Roughly speaking, a data structure is non-adaptive if the memory locations it reads and/or writes when processing a query or update depend only on the query or update and not on the contents of previously read cells. We study such non-adaptive data structures in the cell probe model. This model is one of the least restrictive lower bound models and in particular, cell probe lower bounds apply to data structures developed in the popular word-RAM model. Unfortunately, this generality comes at a high cost: the highest lower bound proved for any data structure problem is only polylogarithmic. Our main result is to demonstrate that one can in fact obtain polynomial cell probe lower bounds for non-adaptive data structures. To shed more light on the seemingly inherent polylogarithmic lower bound barrier, we study several different notions of non-adaptivity and identify key properties that must be dealt with if we are to prove polynomial lower bounds without restrictions on the data structures. Finally, our results also unveil an interesting connection between data structures and depth-2 circuits. This allows us to translate conjectured hard data structure problems into good candidates for high circuit lower bounds; in particular, in the area of linear circuits for linear operators. Building on lower bound proofs for data structures in slightly more restrictive models, we also present a number of properties of linear operators which we believe are worth investigating in the realm of circuit lower bounds.


Introduction
Proving lower bounds on the performance of data structures has been an important line of research for decades. Over time, numerous computational models have been proposed, of which the cell probe model of Yao [21] is the least restrictive. Lower bounds proved in this model apply to essentially any imaginable data structure, including those developed in the most popular upper bound model, the word-RAM. Much effort has therefore been spent on deriving cell probe lower bounds for natural data structure problems. Nevertheless, the highest lower bound that has been proved for any data structure problem remains just polylogarithmic.
In this paper, we consider a natural restriction of data structures, namely non-adaptivity. Roughly speaking, a non-adaptive data structure is a data structure for which the memory locations read when answering a query or processing an update depend only on the query or update itself, and not on the contents of the previously read memory locations. Surprisingly, we are able to derive polynomially high cell probe lower bounds for such data structures.

The Cell Probe Model
In the cell probe model, a data structure consists of a collection of memory cells, each storing w bits. Each cell has an integer address amongst [2^w] = {1, ..., 2^w}, i.e. we assume any cell has enough bits to address any other cell. When a data structure is presented with a query, the query algorithm starts reading, or probing, cells of the memory. The cell probed at each step may depend arbitrarily on the query and the contents of all cells probed so far. After probing a number of cells, the query algorithm terminates with the answer to the query.
A dynamic data structure in the cell probe model must also support updates. When presented with an update, the update algorithm similarly starts reading and/or writing cells of the data structure. We refer jointly to reading or writing a cell as probing the cell. The cell probed at each step, and the contents written to a cell at each step, may again depend arbitrarily on the update operation and the cells probed so far.
The query and update times of a cell probe data structure are defined as the number of cells probed when answering a query or processing an update, respectively. The space usage is simply defined as the largest address used by any cell of the data structure.
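The accounting above can be made concrete in a minimal sketch (ours, not from the paper): memory is a collection of w-bit cells, only probes are charged, and an adaptive query may choose each address based on the contents it has already read. Here the adaptive query follows a linked chain of addresses; all names are illustrative.

```python
# A minimal sketch of the cell probe model: memory is a collection of w-bit
# cells, and only probes are charged; all computation between probes is free.
W = 8  # cell size w in bits

class CellProbeMemory:
    def __init__(self):
        self.cells = {}   # address -> w-bit contents
        self.probes = 0   # probe counter: the only cost measure in the model

    def read(self, addr):
        self.probes += 1
        return self.cells.get(addr, 0) & ((1 << W) - 1)

    def write(self, addr, value):
        self.probes += 1
        self.cells[addr] = value & ((1 << W) - 1)

# An *adaptive* query: each probed address depends on previously read contents,
# here by following a linked chain of addresses until a 0 terminator.
def adaptive_follow(mem, start):
    addr = start
    while True:
        nxt = mem.read(addr)    # the next address is read from the cell itself
        if nxt == 0:
            return addr, mem.probes
        addr = nxt

mem = CellProbeMemory()
mem.write(1, 2); mem.write(2, 5); mem.write(5, 0)
mem.probes = 0                  # charge only the query, not the setup writes
end, cost = adaptive_follow(mem, 1)
print(end, cost)                # 5 3: probe sequence 1 -> 2 -> 5
```

A non-adaptive query, by contrast, would have to commit to the full probe sequence up front, before seeing any cell contents.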

Previous Cell Probe Lower Bound Techniques
As mentioned, the state-of-the-art techniques for proving cell probe lower bounds unfortunately yield just polylogarithmic bounds. In the following, we give a brief overview of the highest lower bounds that have been achieved since the introduction of the model, and also the most promising line of attack towards polynomial lower bounds.
Static Data Structures. One of the most important early papers on cell probe lower bounds for static data structures is the paper of Miltersen et al. [15]. They demonstrated an elegant reduction to data structures from an asymmetric communication game. This connection allowed them to obtain lower bounds of the form t_q = Ω(lg m / lg S), where m denotes the number of queries to the data structure problem, S the space usage in number of cells and t_q the query time. Note however that this bound is insensitive to polynomial changes in S and cannot give super-constant lower bounds for problems where the number of possible queries is just polynomial in the input size (which is true for most natural problems). This barrier was overcome in the seminal work of Pǎtraşcu and Thorup [19], who extended the communication game of Miltersen et al. [15] and obtained lower bounds of t_q = Ω(lg m / lg(St_q/n)), which peaks at t_q = Ω(lg m / lg lg m) for data structures using n · poly(lg m) space.
An alternative approach to static lower bounds was given by Panigrahy et al. [16]. Their method is based on sampling the cells of a data structure and showing that many queries can be answered from a small set of cells if the query time is too small (we note that similar ideas have been used for succinct data structure lower bounds, see e.g. [9]). The maximum lower bounds that can be obtained from this technique are of the form t_q = Ω(lg m / lg(S/n)), see [13]. For linear space, this reaches t_q = Ω(lg m), which remains the highest static lower bound to date.
Dynamic Data Structures. The first technique for proving lower bounds on dynamic data structures was the chronogram technique of Fredman and Saks [7]. This technique gives lower bounds of the form t_q = Ω(lg n / lg(wt_u)) and plays a fundamental role in all later techniques for proving dynamic data structure lower bounds. Pǎtraşcu and Demaine [18] extended the technique of Fredman and Saks with their information transfer technique. This extension allowed for lower bounds of max{t_q, t_u} = Ω(lg n). Very recently, Larsen [12] combined the chronogram technique of Fredman and Saks with the cell sampling method of Panigrahy et al. to obtain a lower bound of t_q = Ω((lg n / lg(wt_u))^2), which remains the highest lower bound achieved so far.
Conditional Lower Bounds. Examining all of the above results, we observe that no lower bound has yet exceeded max{t_u, t_q} = Ω((lg n / lg lg n)^2) in the most natural case of polynomially many queries, i.e. m = poly(n). In an attempt to overcome this barrier, Pǎtraşcu [17] defined a dynamic version of a set disjointness problem, named the multiphase problem. We study problems that are closely related to the multiphase problem, so we summarize it here.
The Multiphase Problem. This problem consists of three phases:
• Phase I: In this phase, we receive k sets S_1, ..., S_k, all subsets of a universe [n]. We are allowed to preprocess these sets into a data structure using time O(τkn).
• Phase II: We receive another set T ⊆ [n] and have time O(τ n) to read and update cells of the data structure constructed in Phase I.
• Phase III: We receive an index i ∈ [k] and have time O(τ) to read cells of the data structure constructed during Phases I and II in order to determine whether S_i ∩ T = ∅.
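The three phases can be sketched with the naive bitset solution, which achieves τ = Θ(n/w): Phase I packs each S_i into n/w words, Phase II packs T, and Phase III ANDs two bitsets word by word. The sets and parameters below are illustrative.

```python
# A hedged sketch of the multiphase problem with the naive bitset solution,
# achieving tau = Theta(n/w): each set is packed into ceil(n/W) machine words.
W = 64  # word size w

def pack(s, n):
    # Pack a subset of [n] into a list of W-bit words.
    words = [0] * ((n + W - 1) // W)
    for x in s:
        words[x // W] |= 1 << (x % W)
    return words

n, k = 100, 3
S = [{1, 5, 7}, {2, 98}, {40}]            # Phase I input sets
T = {5, 99}                               # Phase II input set

memory = [pack(Si, n) for Si in S]        # Phase I: O(kn/w) words written
Tw = pack(T, n)                           # Phase II: O(n/w) words written

def phase3(i):
    # Phase III: O(n/w) word reads, so tau = Theta(n/w) suffices.
    return any(a & b for a, b in zip(memory[i], Tw))  # True iff S_i, T intersect

print([phase3(i) for i in range(k)])      # [True, False, False]
```

Pǎtraşcu's conjecture, stated next, says that no data structure can do polynomially better than this word-level trick.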
Pǎtraşcu conjectured that there exist constants µ > 1 and ε > 0 such that any solution to the multiphase problem must have τ = Ω(n^ε) when k = n^µ, i.e. for the right relationship between n and k, any data structure must have either polynomial preprocessing time, update time or query time. Furthermore, he reduced the multiphase problem to a number of natural data structure problems, including e.g. the following problems.
• Reachability in Directed Graphs. In a preprocessing phase, we are given a directed graph with n nodes and m edges. We are then to support inserting directed edges into the graph. A query is finally specified by two nodes of the graph, u and v, and the goal is to determine whether there exists a directed path from u to v.
• Subgraph Connectivity. In a preprocessing phase, we are given an undirected graph with n nodes and m edges. We are then to turn nodes on and off. A query is finally specified by two nodes of the graph, u and v, and the goal is to determine whether there exists a path from u to v using only on nodes.
A third problem, range mode, was shown in [2] to solve the multiphase problem as well. These reductions imply polynomial lower bounds for the above problems if the multiphase problem has a polynomial lower bound. Thus it seems fair to say that studying the multiphase problem is the most promising direction for obtaining polynomial data structure lower bounds.

Non-Adaptivity
Given that we are generally clueless about how to prove polynomial lower bounds in the cell probe model, it is natural to investigate under which circumstances such bounds can be achieved.
In this paper, we study the performance of data structures that are non-adaptive. To make the notion of non-adaptivity precise, we define it in the following:
• Non-Adaptive Query Algorithm. A cell probe data structure has a non-adaptive query algorithm if the cells it probes when answering a query depend only on the query, and not on the contents of previously probed cells.
• Non-Adaptive Update Algorithm. Similarly, a cell probe data structure has a non-adaptive update algorithm if the cells it probes when processing an update depend only on the update, and not on the contents of previously probed cells.
• Memoryless Update Algorithm. In this paper, we also study a slightly more restrictive type of update algorithm. A cell probe data structure has a memoryless update algorithm if the update algorithm is both non-adaptive, and furthermore, the contents written to a cell during an update depend only on the update and the current contents of the cell, i.e., they may not depend on the contents of other cells probed during the update operation.
• Linear Data Structures. Finally, we study a sub-class of the data structures with a memoryless update algorithm, which we refer to as linear data structures. These data structures are defined for problems where the input can be interpreted as an array A of n bits and an update operation can be interpreted as flipping a bit of A (from 0 to 1 or from 1 to 0). A linear data structure has non-adaptive query and update algorithms. Furthermore, when processing an update, the contents of all probed cells are simply flipped, and on a query, the data structure returns the XOR of the bits stored in all the probed cells. Note that these data structures use a word size of only w = 1 bit, every cell stores a linear combination over the bits of A (mod 2), and a query again computes a linear combination over the stored linear combinations (mod 2).
While linear data structures might appear severely restrictive, for many data structure problems (particularly in the area of range searching), natural solutions are in fact linear. An example is the well-studied prefix sum problem, where the goal is to dynamically maintain an array A of bits under flip operations, and a query asks for the XOR of elements in a prefix range A[1..k]. One-dimensional range trees are linear data structures that solve prefix sum with update and query time O(lg n). This is optimal when memory cells store only single bits [18], even for adaptive data structures. More elaborate problems in range searching would be: Given a fixed set P of n points in d-dimensional space, support deleting and re-inserting points of P while answering queries of the form "what is the parity of the number of points inside a given query range?". Here query ranges could be axis-aligned rectangles, halfspaces, simplices etc. We note that all the known data structures for range counting can easily be modified to yield linear data structures when given a fixed set of points P, and still, this setting seems to capture the hardness of range counting.
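As a concrete illustration (ours, not from the paper), prefix sum admits a linear data structure in the form of a GF(2) binary indexed (Fenwick) tree, a close relative of the one-dimensional range tree mentioned above: the probed addresses depend only on the index, an update simply flips the probed cells, and a query XORs them, all in O(lg n) probes with w = 1.

```python
# A sketch of a linear data structure for prefix parity: a Fenwick tree over
# GF(2). Cell t stores the XOR of a fixed subrange of A, probed addresses
# depend only on the index (non-adaptive), and both operations use O(lg n) probes.
n = 16
cells = [0] * (n + 1)          # 1-bit cells, 1-indexed

def update_cells(j):           # addresses probed when flipping A[j]
    t, out = j, []
    while t <= n:
        out.append(t)
        t += t & (-t)
    return out                 # depends only on j: non-adaptive

def query_cells(k):            # addresses probed for the prefix A[1..k]
    t, out = k, []
    while t > 0:
        out.append(t)
        t -= t & (-t)
    return out                 # depends only on k: non-adaptive

def flip(j):                   # linear update: probed cells are simply flipped
    for t in update_cells(j):
        cells[t] ^= 1

def prefix_parity(k):          # linear query: answer is the XOR of probed cells
    ans = 0
    for t in query_cells(k):
        ans ^= cells[t]
    return ans

for j in (3, 7, 7, 10):        # flip A[3], flip A[7] twice, flip A[10]
    flip(j)
print([prefix_parity(k) for k in (2, 3, 9, 10)])  # [0, 1, 1, 0]
```

Every cell is a fixed linear combination of the bits of A, exactly as the definition of linear data structures requires.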
The main difference between non-adaptive and memoryless update algorithms is that non-adaptive update algorithms may move the information about an update operation around the data structure, even on later updates. As an example, consider a data structure with a non-adaptive update algorithm and two possible updates, say updates u_1 and u_2. Even if the data structure only probes the first memory cell on update u_1, information about u_1 can be stored many other places in the data structure. Imagine the data structure initially stores the value 0 in the first memory cell. Whenever update u_1 is performed, the data structure increments the contents of the first memory cell by one. On update u_2, the data structure copies the contents of the first memory cell to the second memory cell. Clearly both operations are non-adaptive, and we observe that whenever we have performed update u_2, the second memory cell stores the number of times update u_1 has been performed, even though u_1 never probes the cell. For memoryless updates, information about an update is only stored in cells that are actually probed when processing the update operation.
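The example above can be sketched in a few lines: both updates probe fixed cells, so they are non-adaptive, yet u_2 moves information about u_1 into a cell that u_1 itself never probes, which a memoryless update cannot do.

```python
# The u1/u2 example in code: non-adaptive but (for u2) not memoryless.
cells = {1: 0, 2: 0}

def u1():                 # always probes cell 1 only
    cells[1] += 1         # new contents depend only on cell 1 itself: memoryless

def u2():                 # always probes cells 1 and 2
    cells[2] = cells[1]   # cell 2's new contents depend on *another* probed
                          # cell, so u2 is non-adaptive but not memoryless

u1(); u1(); u1(); u2()
print(cells[2])           # 3: cell 2 counts the u1's, though u1 never probes it
```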
Linear data structures are inherently memoryless. However, some features possible with memoryless updates are not available to linear data structures. For example, memoryless update algorithms can support cells that maintain a count of the total number of updates executed. This is not possible with linear data structures, since the contents of each cell are a fixed linear combination of the data being stored.

Our Results
The main result of this paper is to demonstrate that polynomial cell probe lower bounds can be achieved when we restrict data structures to be non-adaptive. In Section 2, we also prove lower bounds for data structures where only the query algorithm is non-adaptive. The concrete data structure problem that we study in this setting is the following indexing problem.
Indexing Problem. In a preprocessing phase, we receive a set of k binary strings S_1, ..., S_k, each of length n. We are then to support updates, consisting of an index j ∈ [n], which we think of as an index into the strings S_1, ..., S_k. A query is finally specified by an index i ∈ [k], and the goal is to return the j'th bit of S_i, where j is the index given in the most recent update.

Theorem 1. Any cell probe data structure solving the indexing problem with a non-adaptive query algorithm must either have t_q = Ω(n/w) or t_u = Ω(k/w), regardless of the preprocessing time and space usage.
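Theorem 1 is tight up to constant factors: two trivial schemes sit at the two extremes of the tradeoff. The sketch below (our own illustration; the packing layout and names are not from the paper) shows scheme A with t_u = O(k/w), t_q = O(1), and scheme B with t_u = O(1), t_q = O(n/w); both query algorithms probe addresses fixed by the query alone.

```python
# Two trivial non-adaptive schemes matching Theorem 1 (a hedged sketch).
W = 4                                   # word size w
k, n = 6, 12
S = [[(i * j + i + j) % 2 for j in range(n)] for i in range(k)]  # arbitrary bits

# Scheme A: update j writes the k current answer bits, packed into ceil(k/W)
# cells, so t_u = O(k/w); a query probes one fixed cell, so t_q = O(1).
answers = [0] * ((k + W - 1) // W)
def update_A(j):
    for c in range(len(answers)):
        word = 0
        for b in range(W):
            i = c * W + b
            if i < k and S[i][j]:
                word |= 1 << b
        answers[c] = word
def query_A(i):
    return (answers[i // W] >> (i % W)) & 1

# Scheme B: update j writes j into one cell (t_u = O(1)); query i reads that
# cell plus all ceil(n/W) cells storing S_i, so t_q = O(n/w). The addresses
# are fixed by i alone, so the query is still non-adaptive.
state = {"j": 0}
def update_B(j):
    state["j"] = j
def query_B(i):
    return S[i][state["j"]]             # conceptually reads the n/W cells of S_i

update_A(5); update_B(5)
print(all(query_A(i) == query_B(i) == S[i][5] for i in range(k)))  # True
```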
Examining this problem, one quickly observes that it is a special case of the multiphase problem presented in Section 1.2, thus by setting the parameters in the reductions of [17,2] correctly we obtain, amongst others, the following lower bounds as an immediate corollary of our lower bound for the indexing problem:

Corollary 1. Any cell probe data structure that uses a non-adaptive query algorithm to solve (i) reachability in directed graphs or (ii) subgraph connectivity must either have t_q = Ω(n/w) or t_u = Ω(n/w). Any cell probe data structure that solves range mode with a non-adaptive query algorithm must have t_q t_u = Ω(n/w^2).
In Section 2, we also prove lower bounds for data structures where the query algorithm is allowed to be adaptive, but the update algorithm is memoryless. Again, we prove our lower bound for a special case of the multiphase problem:

Set Disjointness Problem. In a preprocessing phase, we receive a subset S of a universe [n]. We are then to support inserting elements x ∈ [n] into an initially empty set T. Finally, a query simply asks to return whether S ∩ T = ∅, i.e. the problem has just one query.
Theorem 2. Any cell probe data structure solving the set disjointness problem with a memoryless update algorithm must have t_q = Ω(n/w), regardless of the preprocessing time, space usage and update time.
Again, using the reductions of [17,2], we obtain the following lower bounds as a corollary of our lower bound for the set disjointness problem:

Corollary 2. Any cell probe data structure that uses a memoryless update algorithm to solve (i) reachability in directed graphs, (ii) subgraph connectivity, or (iii) range mode must have t_q = Ω(n/w).
Finally, in Section 3, we show a strong connection between non-adaptive data structures and the wire complexity of depth-2 circuits. In these circuits, gates have unbounded fan-in and fan-out and compute arbitrary functions, so bounds on the number of gates are trivial. Instead, the size s(C) of a circuit is defined to be the number of wires.
Proving lower bounds on the size of circuits computing explicit operators F : {0,1}^n → {0,1}^m has been studied in several works. In particular, Valiant [20] showed that an ω(n^2/ lg lg n) bound for circuits computing F implies that F cannot be computed by log-depth, linear-size, bounded fan-in circuits. Currently, the best bounds known for an explicit operator are Ω(n^{3/2}). Cherukhin [6] gave such a bound for circuits computing cyclic convolutions. Jukna [10] gave a similar lower bound for circuits computing matrix multiplication, and developed a general technique for proving such lower bounds, formalizing the intuition in [6].
First, we show how to use simple encoding arguments common to data structure lower bounds to achieve circuit lower bounds, using matrix multiplication as an example. Our bound matches the result from [10], but yields a simpler argument. We discuss Jukna's technique in more detail in Section 3.
Depth-2 circuits computing explicit linear operators are of particular interest. Currently, the best lower bound for an explicit linear operator is the recent Θ(n(lg n/ lg lg n)^2) bound of Gál et al. [8] for circuits that compute error correcting codes. Another interesting question is whether general circuits are more powerful than linear circuits for computing linear operators. Linear circuits use only XOR gates; i.e., each gate outputs a linear combination over GF(2) of its inputs.
We show a generic connection between linear data structures and linear circuits. Given a problem P with n-bit inputs and m queries, define F_P : {0,1}^n → {0,1}^m as the operator mapping an input to the answers of all m queries.

Lemma 1. If there is a linear data structure for a problem P with query time t_q and update time t_u, then there exists a depth-2 linear circuit C computing F_P with size s(C) ≤ nt_u + mt_q. Conversely, if there is a depth-2 linear circuit C that computes F_P, then there is a linear data structure for P with average query time at most s(C)/m and average update time at most s(C)/n.

Lemma 1 thus gives a new way to attack circuit lower bounds. We believe the connection between non-adaptive data structures and depth-2 circuits has the potential to yield strong insight into this problem, and that several linear operators conjectured to have strong data structure lower bounds are good candidates for hard circuit problems (for linear or general circuits).
Apart from being interesting lower bounds in their own right, we believe our results shed much light on the inherent difficulties of proving polynomial lower bounds in the cell probe model. In particular the movement of data when performing updates (see the discussion in Section 1.3) appears to be a major obstacle. We conclude in Section 4 with a discussion of our results and potential directions for future research.

Lower Bounds
In this section, we first prove lower bounds for data structures where only the query algorithm is assumed non-adaptive. The problem we study is the indexing problem defined in Section 1.4.
Theorem 4 (Restatement of Theorem 1). Any cell probe data structure solving the indexing problem with a non-adaptive query algorithm must either have t_q = Ω(n/w) or t_u = Ω(k/w), regardless of the preprocessing time and space usage. Here t_q denotes the query time, t_u the update time and w the cell size in bits.
We prove this using an encoding argument. Specifically, consider a game between an encoder and a decoder. The encoder receives as input k binary strings S_1, ..., S_k, each of length n, and must from this send a message to the decoder. From the message alone, the decoder must uniquely recover all the strings S_1, ..., S_k. If the strings S_1, ..., S_k are drawn from a distribution, then the expected length of the message must be at least H(S_1 ⋯ S_k), or we have reached a contradiction. Here H(·) denotes Shannon entropy.
The idea in our proof is to assume for contradiction that a data structure for the indexing problem exists with a non-adaptive query algorithm that simultaneously has t_q = o(n/w) and t_u = o(k/w). Using this data structure as a black box, we construct a message that is shorter than H(S_1 ⋯ S_k), but from which the decoder can still recover S_1, ..., S_k, i.e. we have reached a contradiction. We let the k strings S_1, ..., S_k given as input to the encoder be uniform random bit strings of length n. Clearly H(S_1 ⋯ S_k) = kn.
Encoding Procedure. When given the strings S_1, ..., S_k as input, the encoder first runs the preprocessing algorithm of the claimed data structure on S_1, ..., S_k. He then examines every possible query index i ∈ [k], and for each i, collects the set of addresses of the cells probed on query i. Since the query algorithm is non-adaptive, these sets of addresses are independent of S_1, ..., S_k and any updates we might perform on the data structure. Letting C denote the set containing all these addresses for all i, the encoder starts by writing down the concatenation of the contents of all cells with an address in C. This constitutes the first part of the message.
The encoder now runs through every possible update j ∈ [n]. For each j, he runs the update algorithm as if update j was performed on the data structure. While running update j, the encoder appends the contents of the probed cells (as they are when the update reads the cells, not after potential changes) to the constructed message. After processing all j's, the encoder finally sends the constructed message to the decoder. This completes the encoding procedure.
Decoding Procedure. The decoder receives as input the message consisting first of the contents of all cells with an address in C after preprocessing S_1, ..., S_k. Since the query algorithm is non-adaptive, the decoder knows the addresses of all these cells simply by examining the query algorithm of the claimed data structure. The decoder will now run the update algorithm for every j ∈ [n]. While doing this, he maintains the contents of all cells in C and all cells probed during the updates. Specifically, the decoder does the following: For each j = 1, ..., n in turn, he starts to run the update algorithm for j. Observe that the contents of each probed cell (before potential changes) can be recovered from the message (the contents appear one after another in the message). This allows the decoder to completely simulate the update algorithm for each j = 1, ..., n. Note furthermore that for each cell that is probed during these updates, the address can also be recovered simply by examining the update algorithm. In this way, the decoder always knows the contents of all cells in C and all cells probed by the update algorithm, as they would have been after preprocessing S_1, ..., S_k and performing the updates after this preprocessing. While processing the updates j = 1, ..., n, the decoder also executes a number of queries: After having completely processed an update j, the decoder runs the query algorithm for every i ∈ [k]. Note that the decoder knows the contents of all the probed cells as if the preprocessing on S_1, ..., S_k had been performed, followed by updates j′ = 1, ..., j. This implies that the simulation of the query algorithm for each i ∈ [k] terminates precisely with the answer being the j'th bit of S_i. It follows immediately that the decoder can recover every bit of every S_i from the message.
Analysis. What remains is to analyze the size of the message. Since by assumption, the query time is t_q = o(n/w), the first part of the message has t_q kw = o(kn) bits. Similarly, we assumed t_u = o(k/w), thus the second part of the message has t_u nw = o(kn) bits. Thus the entire message has o(kn) bits. Since H(S_1 ⋯ S_k) = kn, we have reached our contradiction. This completes the proof of Theorem 1.
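The encoding and decoding procedures can be exercised end to end on a toy data structure (our own illustration, not part of the proof): preprocessing stores bit S_i[j] at a hypothetical address BASE + i·n + j, update j copies column j into k dedicated answer cells, and query i non-adaptively probes answer cell i. The decoder sees only the message and the input-independent probe addresses, yet recovers all of S.

```python
# A toy run of the encoding argument behind Theorem 1 on a concrete
# (hypothetical) non-adaptive data structure.
k, n, BASE = 4, 8, 100
S = [[(3 * i + j * j) % 2 for j in range(n)] for i in range(k)]  # encoder input

def query_addr(i):                 # non-adaptive query: one fixed address
    return i
def update_probe_addrs(j):         # addresses probed by update j (fixed given j)
    return [BASE + i * n + j for i in range(k)]
def update_writes(j, read):        # writes as a function of j and probed contents
    return {i: read[i] for i in range(k)}   # copy column j to the answer cells

# Encoder: preprocess, then record part 1 (cells in C) and part 2 (probed reads).
cells = {BASE + i * n + j: S[i][j] for i in range(k) for j in range(n)}
C = sorted({query_addr(i) for i in range(k)})
message = [cells.get(a, 0) for a in C]
for j in range(n):
    read = [cells.get(a, 0) for a in update_probe_addrs(j)]
    message += read                # contents as they are when the update reads them
    cells.update(update_writes(j, read))

# Decoder: knows only the probe addresses and the message.
pos, known = 0, {}
for a in C:
    known[a] = message[pos]; pos += 1
recovered = [[0] * n for _ in range(k)]
for j in range(n):
    read = message[pos:pos + k]; pos += k      # probed contents, from the message
    known.update(update_writes(j, read))       # simulate the update's writes
    for i in range(k):
        recovered[i][j] = known[query_addr(i)] # simulate query i after update j
print(recovered == S)                          # True
```

The key point mirrors the proof: the decoder never sees S directly, only the probed contents, yet the non-adaptive probe addresses let it replay every update and query.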
Next, we prove lower bounds for data structures where only the update algorithm is assumed to be memoryless, that is, we allow the query algorithm to be adaptive. In this setting, we study the set disjointness problem defined in Section 1.4:

Theorem 5 (Restatement of Theorem 2). Any cell probe data structure solving the set disjointness problem with a memoryless update algorithm must have t_q = Ω(n/w), regardless of the preprocessing time, space usage and update time. Here t_q denotes the query time and w the cell size in bits.
Again, we prove this using an encoding argument. In this encoding proof, we let the input of the encoder be a uniform random set S ⊆ [n]. Clearly H(S) = n bits. We now assume for contradiction that there exists a data structure for the set disjointness problem that has a memoryless update algorithm and at the same time has query time t_q = o(n/w). The encoder uses this data structure to send a message encoding S in fewer than n bits, i.e. a contradiction.
Encoding Procedure. When the encoder receives S, he runs the preprocessing algorithm of the claimed data structure. Then, he computes S̄ = [n] \ S and inserts S̄ into the data structure as the set T. Finally, the encoder runs the query algorithm and notes the set of cells C probed. Note that by the choice of S̄, the query algorithm will output disjoint, and furthermore, S̄ is the largest possible set that will result in a disjoint answer.
The encoding consists of three parts: (i) the addresses of the cells in C, (ii) the contents of the cells in C after preprocessing but before inserting S̄, and (iii) the contents of the cells in C after inserting S̄.
Decoding Procedure. The decoder iterates over all sets S′ ⊆ [n]. Each time, the decoder initializes the contents of cells in C to match the second part of the encoder's message. Then, he inserts each element of S′ into the data structure, changing the contents of any cell in C where appropriate. When a cell outside of C is to be changed, the decoder does nothing. Since the update algorithm is memoryless, this procedure ends with all cells in C having the same contents as they would have had after preprocessing S and inserting the elements of S′. Moreover, if the contents match the contents written down in the third part of the encoding, then it must be that S and S′ are disjoint (we know that the query answers disjoint when the contents of C are like that). When S′ = S̄, the contents of C will match the last part of the encoding, and S̄ is trivially the largest set to do so. Thus, the decoder selects the largest set S* whose updates to C match the contents written down in the third part of the encoding. In this way, the decoder recovers S = [n] \ S*.
Analysis. Finally, we analyze the size of the encoding. Since we assumed t_q = o(n/w), the encoding has size 3t_q w = o(n) bits. But H(S) = n, thus we have reached a contradiction.
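This argument too can be run on a toy instance (our own illustration; the cell layout, addresses and word size are hypothetical): the data structure stores S in dedicated cells at preprocessing and maintains T as a bitset, and the update "insert x" is memoryless since it probes only one cell and ORs in one bit, a function of x and that cell's current contents alone. The decoder brute-forces over candidate sets S′ exactly as in the proof.

```python
# A toy run of the encoding argument behind Theorem 2 with a memoryless update.
from itertools import combinations

W, n, T0, S0 = 4, 6, 10, 20        # word size, universe size, cell base addresses
S = {1, 4}                         # the encoder's input set

def insert_writes(x, old):         # memoryless: depends only on x and old contents
    return old | (1 << (x % W))

def query_probes():                # addresses probed by the single query
    words = (n + W - 1) // W
    return [S0 + c for c in range(words)] + [T0 + c for c in range(words)]

# Encoder: preprocess S, insert Sbar = [n] \ S, record parts (ii) and (iii).
cells = {}
for x in S:
    cells[S0 + x // W] = cells.get(S0 + x // W, 0) | (1 << (x % W))
part2 = {a: cells.get(a, 0) for a in query_probes()}     # before inserting Sbar
Sbar = set(range(n)) - S
for x in Sbar:
    a = T0 + x // W
    cells[a] = insert_writes(x, cells.get(a, 0))
part3 = {a: cells.get(a, 0) for a in query_probes()}     # after inserting Sbar

# Decoder: replay the memoryless inserts on the cells in C only, for every S'.
best = None
for r in range(n + 1):
    for Sp in combinations(range(n), r):
        state = dict(part2)
        for x in Sp:
            a = T0 + x // W
            if a in state:                    # cells outside C are ignored
                state[a] = insert_writes(x, state[a])
        if state == part3 and (best is None or len(Sp) > len(best)):
            best = set(Sp)
print(set(range(n)) - best == S)              # True: S recovered as [n] \ S*
```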

Circuits and Non-Adaptive Data Structures
In this section, we demonstrate a strong connection between non-adaptive data structures and the wire complexity of depth-2 circuits. A depth-2 circuit computing an operator F : {0,1}^n → {0,1}^m consists of n input nodes, a middle layer of interior gates, and m output gates. There are edges between input nodes and interior gates, and between interior gates and output gates. Each gate computes an arbitrary function of its inputs. Since non-input nodes compute arbitrary functions, F can be trivially computed using m gates. Instead, we define the size s(C) of a depth-2 circuit C as the total number of wires in it; i.e., the number of edges in the graph. First, we show how to use the encoding technique common to data structure lower bounds to achieve size bounds for depth-2 circuits. As a proof of concept, we prove such a lower bound for matrix multiplication. We say that a circuit computes matrix multiplication if the inputs correspond to the entries of two √n × √n binary matrices A and B, and each output gate computes an entry in the product A · B. Arithmetic is in GF(2).
Jukna [10] considered depth-2 circuits and gave an n^{3/2} lower bound for circuits computing boolean matrix multiplication. At a high level, his proof proceeds in the following fashion.
1. Partition the inputs into sets I_1, ..., I_t and the outputs into sets J_1, ..., J_t.
2. Prove that for each 1 ≤ ℓ ≤ t, the number of wires leaving inputs from I ℓ plus the number of wires entering outputs in J ℓ must be large.
3. Conclude a large lower bound by summing the terms from Step 2.
Note that since {I_ℓ} and {J_ℓ} are partitions, they induce a partition on the wires in the circuit: the wires leaving inputs in I_ℓ and the wires entering outputs in J_ℓ, respectively. By ranging over different ℓ, Jukna is able to argue that the entropy of matrix multiplication is high. The details of this argument are technical. We give a new proof for Step 2 using an encoding argument. The encoder exploits the circuit operations to encode a √n × √n matrix A. The encoded message has length precisely equal to the number of wires leaving inputs in I_ℓ plus the number of wires entering outputs in J_ℓ. The argument is very similar to the arguments in Section 2; we leave it to the full version of the paper for lack of space.
Theorem 6. Any circuit C computing boolean matrix multiplication has size s(C) ≥ n^{3/2}.
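For intuition, Theorem 6 is matched up to constants by the obvious depth-2 circuit (a sketch of ours, not from the paper): one middle gate per output entry (i, j), wired to row i of A and column j of B, computing the GF(2) inner product. For d × d matrices this gives d^2 gates with 2d incoming wires and one outgoing wire each, i.e. Θ(d^3) wires.

```python
# The trivial depth-2 circuit for GF(2) matrix multiplication of d x d matrices.
import random
random.seed(1)
d = 4
A = [[random.randint(0, 1) for _ in range(d)] for _ in range(d)]
B = [[random.randint(0, 1) for _ in range(d)] for _ in range(d)]

# Wires into the middle gate for output (i, j): row i of A and column j of B.
wires_in = {(i, j): [("A", i, t) for t in range(d)] + [("B", t, j) for t in range(d)]
            for i in range(d) for j in range(d)}
num_wires = sum(len(v) for v in wires_in.values()) + d * d  # + one output wire per gate

def evaluate(i, j):
    # Middle gates may compute arbitrary functions; here the GF(2) inner product.
    val = 0
    for t in range(d):
        val ^= A[i][t] & B[t][j]
    return val

C = [[evaluate(i, j) for j in range(d)] for i in range(d)]
direct = [[sum(A[i][t] * B[t][j] for t in range(d)) % 2 for j in range(d)]
          for i in range(d)]
print(C == direct, num_wires)     # True 144, i.e. 2*d**3 + d**2 wires
```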
Finally, we provide a strong connection between depth-2 linear circuits and linear data structures. The connection is almost immediately established: Lemma 2 (Restatement of Lemma 1). If there is a linear data structure for a problem P with query time t_q and update time t_u, then there exists a depth-2 linear circuit C computing F_P with size s(C) ≤ nt_u + mt_q.
If there is a depth-2 linear circuit C computing F P , then there is a linear data structure for P with average query time at most s(C)/m and average update time at most s(C)/n.
Proof. First, suppose there exists a linear data structure solving P. We construct the corresponding depth-2 circuit directly. Input nodes correspond to the n bits of the input (the array A in the definition of linear data structures). Output nodes correspond to the m possible queries, and there is an interior node for each cell of the data structure. For each update 1 ≤ i ≤ n (flip an entry of A), add wires from input x_i to each of the cells probed by the update. Similarly, add wires (c_i, z_j) whenever the jth query probes the ith cell of the data structure. Correctness follows immediately. Finally, note that since updates and queries probe at most t_u and t_q cells respectively, the total number of wires in the circuit is bounded by s(C) ≤ nt_u + mt_q.
Constructing a linear data structure from a linear depth-2 circuit C is similar. Letting t_{u,i} and t_{q,j} denote the number of cells probed during the ith update and jth query respectively, it is easy to see that s(C) = Σ_{i=1}^n t_{u,i} + Σ_{j=1}^m t_{q,j}. It follows that the average update time is at most (1/n)Σ_i t_{u,i} ≤ s(C)/n, and similarly that the average query time is at most (1/m)Σ_j t_{q,j} ≤ s(C)/m.
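The first direction of the construction can be sketched concretely (our illustration) using the GF(2) Fenwick tree for prefix parity as the linear data structure: layer-1 wires follow the update sets, layer-2 wires follow the query sets, every gate is an XOR, and the wire count obeys s(C) ≤ n·t_u + m·t_q.

```python
# Turning a linear data structure (GF(2) Fenwick tree for prefix parity) into
# a depth-2 linear circuit for F_P, as in the first direction of Lemma 1.
n = 8                                   # n input bits; for prefix sum, m = n queries

def update_cells(j):                    # cells flipped when A[j] flips (1-indexed)
    t, out = j, []
    while t <= n:
        out.append(t); t += t & (-t)
    return out

def query_cells(k):                     # cells XORed to answer prefix A[1..k]
    t, out = k, []
    while t > 0:
        out.append(t); t -= t & (-t)
    return out

up_wires = [(j, c) for j in range(1, n + 1) for c in update_cells(j)]    # layer 1
qr_wires = [(c, k) for k in range(1, n + 1) for c in query_cells(k)]     # layer 2
s = len(up_wires) + len(qr_wires)       # the circuit size: total number of wires

def circuit(A):                         # evaluate the linear (all-XOR) circuit
    mid = {c: 0 for c in range(1, n + 1)}
    for j, c in up_wires:               # middle gate = XOR of its wired inputs
        mid[c] ^= A[j]
    out = {k: 0 for k in range(1, n + 1)}
    for c, k in qr_wires:               # output gate = XOR of its wired middle gates
        out[k] ^= mid[c]
    return out

A = {j: (j * j) % 3 % 2 for j in range(1, n + 1)}       # an arbitrary input
out = circuit(A)
expected = {k: sum(A[j] for j in range(1, k + 1)) % 2 for k in range(1, n + 1)}
tu = max(len(update_cells(j)) for j in range(1, n + 1))
tq = max(len(query_cells(k)) for k in range(1, n + 1))
print(out == expected, s <= n * tu + n * tq)            # True True
```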
The main contribution of Lemma 2 is a new range of candidate hard problems for linear circuits, all inspired by data structure problems. As mentioned in Section 1.3, linear data structures occur most naturally in the field of range searching. Furthermore, these data structure problems turn out to correspond precisely to linear operators: Let P = {p_1, ..., p_n} be a fixed set of n points in R^d, and let R be a set of query ranges, where each R_i ∈ R is a subset of R^d. P and R naturally define a linear operator A(P, R) ∈ {0,1}^{|R|×|P|}, where the ith row of A(P, R) has a 1 in the jth column if p_j ∈ R_i and 0 otherwise. In light of Lemma 2, assume a linear data structure solves the following range counting problem: Given the fixed set of points P, each assigned a weight in {0, 1}, support flipping the weights of the points (intuitively, inserting/deleting the points) while also efficiently computing the parity of the weights assigned to the points inside a query range R_i ∈ R. Then that linear data structure immediately translates into a linear circuit for the linear operator A(P, R) and vice versa. Thus we expect that hard range searching problems of the above form also provide hard linear operators for linear circuits. The seemingly hardest range searching problem is simplex range searching, where we believe that the following holds:

Conjecture 1. There exists a constant ε > 0, a set R of Θ(n) simplices in R^d and a set P of n points in R^d, such that any linear data structure solving the above range counting problem (flip weights, parity queries) must have average update and query times satisfying t_u t_q = Ω(n^ε).
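A small concrete instance of the operator A(P, R) (with hypothetical points and, for readability, axis-aligned boxes rather than simplices) makes the correspondence explicit: rows are ranges, columns are points, and range counting with {0,1} weights is exactly computing A(P, R)·x over GF(2).

```python
# Building the incidence operator A(P, R) and applying it over GF(2).
P = [(1, 1), (2, 3), (4, 2), (5, 5)]                        # fixed points in R^2
R = [((0, 0), (3, 3)), ((2, 1), (5, 5)), ((0, 4), (6, 6))]  # query boxes (lo, hi)

def inside(p, box):
    (x, y), ((x1, y1), (x2, y2)) = p, box
    return x1 <= x <= x2 and y1 <= y <= y2

# Row i has a 1 in column j iff point p_j lies in range R_i.
A = [[1 if inside(p, box) else 0 for p in P] for box in R]

def parity_counts(weights):
    # The linear operator: A(P, R) * weights over GF(2), one parity per range.
    return [sum(a * wgt for a, wgt in zip(row, weights)) % 2 for row in A]

w = [1, 1, 0, 1]                          # current {0,1} weights of the points
print(A, parity_counts(w))                # parities are [0, 0, 1]
```

A linear data structure for this problem is precisely a depth-2 linear circuit for A(P, R), which is what makes hard range searching instances candidate hard operators.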
We have toned down Conjecture 1 somewhat: the community generally believes that ε can be replaced by 1 − 1/d, but to be on the safe side we only conjecture the above. In the circuit setting, this conjecture translates to:

Corollary 3. If Conjecture 1 is true for linear data structures, then there exists a constant δ > 0, a set R of Θ(n) simplices in R^d and a set P of n points, such that any linear circuit computing the linear operator A(P, R) must have Ω(n^{1+δ}) wires.
Furthermore, the research on data structure lower bounds also provides a lot of insight into which concrete sets P and R might be difficult. More specifically, polynomial lower bounds for simplex range searching have been proved for: range reporting in the pointer machine [5,1] and the I/O-model [1], range searching in the semi-group model [3], and range searching in the group model [11,14]. The group model comes closest in spirit to linear data structures. A data structure in the group model is essentially a linear data structure where, instead of storing linear combinations over GF(2), we store linear combinations with integer coefficients (and no mod operations). Similarly, queries are answered by computing linear combinations over the stored elements, but with integer coefficients and not over GF(2). The properties used to establish range searching lower bounds in the group model are:
• If A(P, R) has polynomial red-blue discrepancy, then any group model data structure must have t_u · t_q = Ω(n^ε) for some constant ε > 0.
• If A(P, R) has Ω(n) eigenvalues that are polynomial, then any group model data structure must have t_u · t_q = Ω(n^ε) for some constant ε > 0.
• If |R_i ∩ P| is polynomial for all R_i ∈ R and |R_i ∩ R_j ∩ P| = O(1) for all i ≠ j, then any group model data structure must have t_u · t_q = Ω(n^ε) for some constant ε > 0.
The last property directly translates to A(P, R) having rows and columns with polynomially many 1s, and any two rows/columns having a constant number of 1s in common. Given the tight correspondence between group model data structures and linear data structures, we believe these properties are worth investigating in the circuit setting. Furthermore, a concrete set P of n points and a set R of Θ(n) simplices with all three properties is known even in R^2. This example can be found in [4], where it is stated for R being lines (i.e., degenerate simplices). Note that the lower bound in [4] is for range reporting in the pointer machine, but using the observations in [11,14] it is easily seen that all the above properties hold. Even if these properties are not enough to obtain lower bounds for linear operators, we believe the geometric approach might be useful in its own right.

Conclusion
In this paper, we have studied the role non-adaptivity plays in dynamic data structures. Surprisingly, we were able to prove polynomial lower bounds for such data structures. Perhaps more importantly, we believe our results shed much new light on the current polylogarithmic barriers faced when no restrictions are placed on the data structures. We also presented an interesting connection between data structures and depth-2 circuits. The connection between linear operators and range searching is particularly intriguing, revealing a number of new properties to investigate further in the realm of circuit lower bounds.

Acknowledgements
We are grateful to Elad Verbin for several helpful discussions.
Analysis. The first part of the encoding consists of the output of each interior gate adjacent to at least one output in J_ℓ. Thus, the first part of the encoding can be described in at most t_{q,ℓ} bits. The second part of the encoding consists of the output of each interior gate adjacent to each input node in I_ℓ. This requires at most t_{u,ℓ} bits. Thus, the total length of the encoding is at most t_{u,ℓ} + t_{q,ℓ} bits. The decoder recovers all of M from this message. Since each entry of M is independent and uniform, H(M) = n. Thus, t_{u,ℓ} + t_{q,ℓ} ≥ n.
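The counting above can be summarized as a single chain of inequalities (a sketch restating the argument, with |encoding| denoting the bit length of the message sent to the decoder):

```latex
t_{u,\ell} + t_{q,\ell} \;\ge\; |\text{encoding}| \;\ge\; H(M) \;=\; n,
```

where the middle inequality holds because the decoder recovers M exactly, so the encoding cannot be shorter than the entropy of M.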
Remark. As mentioned previously, Jukna proves his lower bounds by defining the entropy of an operator. He lower bounds the wire complexity of a circuit by the entropy of the operator it computes. He proves a lower bound on the entropy of an operator by carefully analyzing subfunctions of the operator, created by fixing subsets of the variables to specific values and considering the induced function on the remaining variables.
Parts of Jukna's proof are similar in spirit to ours. In particular, the way we encode M by fixing the matrix B to be one in entry [k, ℓ] and zero elsewhere corresponds to the subfunctions Jukna considers in his proof. In fact, we believe that any lower bound provable using Jukna's technique can also be proved using our method. Our advantage is in replacing Jukna's technical and somewhat complicated machinery with a simple encoding argument.