Algebraic dependencies and PSPACE algorithms in approximative complexity

Testing whether a set $\mathbf{f}$ of polynomials has an algebraic dependence is a basic problem with several applications. The polynomials are given as algebraic circuits. The algebraic independence testing question is wide open over finite fields (Dvir, Gabizon, Wigderson, FOCS'07). The best complexity known is NP$^{\#\rm P}$ (Mittmann, Saxena, Scheiblechner, Trans. AMS'14). In this work we put the problem in AM $\cap$ coAM. In particular, dependence testing is unlikely to be NP-hard and joins the league of problems of "intermediate" complexity, e.g. graph isomorphism and integer factoring. Our proof method is algebro-geometric: estimating the size of the image/preimage of the polynomial map $\mathbf{f}$ over the finite field. A gap in this size is utilized in the AM protocols. Next, we study the open question of testing whether every annihilator of $\mathbf{f}$ has zero constant term (Kayal, CCC'09). We give a geometric characterization using the Zariski closure of the image of $\mathbf{f}$, introducing a new problem called approximate polynomials satisfiability (APS). We show that APS is NP-hard and, using projective algebraic-geometry ideas, we put APS in PSPACE (the prior best was EXPSPACE via Gröbner basis computation). As an unexpected application of this to approximative complexity theory we get: over any field, a hitting-set for $\overline{\rm VP}$ can be designed in PSPACE. This solves an open problem posed in (Mulmuley, FOCS'12, J. AMS 2017), greatly mitigating the GCT Chasm (exponentially in terms of space complexity).


Introduction
Algebraic dependence is a generalization of linear dependence. Polynomials $f_1, \ldots, f_m \in \mathbb{F}[x_1, \ldots, x_n]$ are called algebraically dependent over a field $\mathbb{F}$ if there exists a nonzero polynomial (called an annihilator) $A(y_1, \ldots, y_m) \in \mathbb{F}[y_1, \ldots, y_m]$ such that $A(f_1, \ldots, f_m) = 0$. If no such $A$ exists, then the given polynomials are called algebraically independent over $\mathbb{F}$. The transcendence degree (trdeg) of a set of polynomials is the analog of rank in linear algebra. It is defined as the maximal number of algebraically independent polynomials in the set. Both algebraic dependence and linear dependence share the combinatorial properties of a matroid structure [ER93]. Algebraic matroids, however, may not be linear (especially over $\mathbb{F}_p$) [Ing71].
The simplest examples of algebraically independent polynomials are $x_1, \ldots, x_n \in \mathbb{F}[x_1, \ldots, x_n]$. As an example of algebraically dependent polynomials, we can take $f_1 = x$, $f_2 = y$ and $f_3 = x^2 + y^2$. Then, $y_1^2 + y_2^2 - y_3$ is an annihilator. The underlying field is crucial in this concept. For example, the polynomials $x + y$ and $x^p + y^p$ are algebraically dependent over $\mathbb{F}_p$, but algebraically independent over $\mathbb{Q}$.
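These examples can be sanity-checked numerically. The following Python sketch (illustrative only, not part of the formal development) verifies the annihilator identity for $\{x, y, x^2+y^2\}$ at random integer points, and the Frobenius identity $(x+y)^p = x^p + y^p$ over $\mathbb{F}_p$ that makes $x+y$ and $x^p+y^p$ dependent:

```python
import random

# Check the annihilator A(y1, y2, y3) = y1^2 + y2^2 - y3 for
# f1 = x, f2 = y, f3 = x^2 + y^2: A(f1, f2, f3) vanishes identically,
# so it vanishes at every sampled point.
def annihilator_check(trials=100):
    for _ in range(trials):
        x, y = random.randint(-10**6, 10**6), random.randint(-10**6, 10**6)
        f1, f2, f3 = x, y, x * x + y * y
        if f1**2 + f2**2 - f3 != 0:
            return False
    return True

# Over F_p, x + y and x^p + y^p are dependent: the Frobenius identity
# (x + y)^p = x^p + y^p (mod p) gives the annihilator y1^p - y2.
def frobenius_check(p=7, trials=100):
    for _ in range(trials):
        x, y = random.randrange(p), random.randrange(p)
        if pow(x + y, p, p) != (pow(x, p, p) + pow(y, p, p)) % p:
            return False
    return True
```

Over $\mathbb{Q}$ the Frobenius identity fails (e.g. $(1+1)^7 \ne 1^7 + 1^7$), matching the claimed independence.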
Thus, the following computational question AD($\mathbb{F}$) is natural and it is the first problem we consider in this paper: Given algebraic circuits $f_1, \ldots, f_m \in \mathbb{F}[x_1, \ldots, x_n]$, test if they are algebraically dependent. It can be solved in PSPACE using a classical result due to Perron [Per27, Pło05, Csa76]. Perron proved that given a set of algebraically dependent polynomials, there exists an annihilator whose degree is upper bounded by the product of the degrees of the polynomials in the set. This exponential degree bound on the annihilator is tight [Kay09].
The annihilator may be quite hard, but it turns out that the decision version is easy over zero (or large) characteristic using a classical result known as the Jacobian criterion [Jac41, BMS13]. The Jacobian efficiently reduces algebraic dependence testing of $f_1, \ldots, f_m$ over $\mathbb{F}$ to linear dependence testing of the differentials $df_1, \ldots, df_m$ over $\mathbb{F}(x_1, \ldots, x_n)$, where we view $df_i$ as the vector $(\partial f_i/\partial x_1, \ldots, \partial f_i/\partial x_n)$. Placing $df_i$ as the $i$-th row gives us the Jacobian matrix $J$ of $f_1, \ldots, f_m$. If the characteristic of the field is zero (or larger than the product of the degrees $\deg(f_i)$) then the trdeg equals $\mathrm{rank}(J)$. It follows from [Sch80] that, with high probability, $\mathrm{rank}(J)$ is equal to the rank of $J$ evaluated at a random point in $\mathbb{F}^n$. This gives a simple randomized polynomial time algorithm solving AD($\mathbb{F}$) for such $\mathbb{F}$.
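For the toy example above the randomized algorithm is easy to sketch. The gradients of $f_1 = x$, $f_2 = y$, $f_3 = x^2+y^2$ are hand-computed; the rank over a large prime field (standing in for characteristic larger than the degree product) is found by Gaussian elimination. The helper `rank_mod_p` is our own illustrative code, not from the paper:

```python
import random

def rank_mod_p(M, p):
    # Rank of an integer matrix over F_p, by Gaussian elimination.
    M = [row[:] for row in M]
    rank, rows, cols = 0, len(M), len(M[0])
    for c in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], -1, p)          # modular inverse (Python >= 3.8)
        M[rank] = [v * inv % p for v in M[rank]]
        for r in range(rows):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(M[r][j] - f * M[rank][j]) % p for j in range(cols)]
        rank += 1
    return rank

# f1 = x, f2 = y, f3 = x^2 + y^2: Jacobian rows are the gradients
# (1, 0), (0, 1), (2x, 2y), evaluated at a random point (a, b).
p = (1 << 61) - 1                  # a large prime, so char >> product of degrees
a, b = random.randrange(p), random.randrange(p)
J = [[1, 0], [0, 1], [2 * a % p, 2 * b % p]]
trdeg = rank_mod_p(J, p)           # rank 2 < 3 polynomials, hence dependent
```

Since the rank (here always 2, as the first two rows already span) is less than the number of polynomials, the algorithm reports dependence, consistent with the annihilator $y_1^2 + y_2^2 - y_3$.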
For fields of positive characteristic, if the polynomials are algebraically dependent, then their Jacobian matrix is not of full rank. But the converse is not true. There are infinitely many input instances (sets of algebraically independent polynomials) for which the Jacobian fails. The failure can be characterized by the notion of 'inseparable extension' [PSS16]. For example, $x^p, y^p$ are algebraically independent over $\mathbb{F}_p$, yet their Jacobian determinant vanishes. Another example is $\{x^{p-1}y,\ xy^{p-1}\}$ over $\mathbb{F}_p$ for prime $p > 2$. [MSS14] gave a criterion, called Witt-Jacobian, that works over fields of prime characteristic $p$, improving the complexity of the independence testing problem from PSPACE to NP$^{\#\rm P}$. [PSS16] gave another generalization of the Jacobian criterion that is efficient in special cases.
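Both failure examples can be checked directly. In the sketch below (illustrative only; gradients are hand-computed) the Jacobian of $\{x^p, y^p\}$ is the zero matrix mod $p$, and the Jacobian determinant of $\{x^{p-1}y,\ xy^{p-1}\}$ is $((p-1)^2 - 1)\,x^{p-1}y^{p-1} \equiv 0 \pmod p$:

```python
p = 5  # a small prime; all arithmetic is in F_p

# Gradients of {x^p, y^p} are (p*x^(p-1), 0) and (0, p*y^(p-1)); both
# vanish mod p, so the Jacobian is zero although x^p, y^p are independent.
def jacobian_xp_yp(x, y):
    return [[p * pow(x, p - 1, p) % p, 0],
            [0, p * pow(y, p - 1, p) % p]]

# Jacobian determinant of {x^(p-1)*y, x*y^(p-1)}:
# ((p-1)^2 - 1) * x^(p-1) * y^(p-1) = (p^2 - 2p) * x^(p-1)*y^(p-1) = 0 mod p.
def jac_det(x, y):
    a = (p - 1) * pow(x, p - 2, p) * y % p   # d/dx of x^(p-1) y
    b = pow(x, p - 1, p)                     # d/dy of x^(p-1) y
    c = pow(y, p - 1, p)                     # d/dx of x y^(p-1)
    d = (p - 1) * x * pow(y, p - 2, p) % p   # d/dy of x y^(p-1)
    return (a * d - b * c) % p
```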
Given that an efficient algorithm to tackle prime characteristic is not in close sight, one could speculate the problem to be NP-hard or even outside the polynomial hierarchy PH. In this work we show that, for finite fields, AD($\mathbb{F}$) is in AM $\cap$ coAM (Theorem 1). This rules out the possibility of NP-hardness, under standard complexity theory assumptions [AB09].
Constant term of the annihilators. We come to the second problem AnnAtZero that we discuss in this paper: Testing if the constant term of every annihilator, of the set of algebraic circuits $\mathbf{f} = \{f_1, \ldots, f_m\}$, is zero. Note that the annihilators of $\mathbf{f}$ constitute an ideal of the polynomial ring $\mathbb{F}[y_1, \ldots, y_m]$; this ideal is principal when the trdeg of $\mathbf{f}$ is $m-1$ [Kay09, Lem.7]. In this case, we can decide if the constant term of the minimal annihilator is zero in PSPACE, as the unique annihilator (up to scaling) can be computed in PSPACE.
If the trdeg of $\mathbf{f}$ is less than $m-1$, the ideal of the annihilators of $\mathbf{f}$ is no longer principal. Although the ideal is finitely generated, finding the generators of this ideal is computationally very hard. (E.g., using Gröbner basis techniques, we can do it in EXPSPACE [DK15, Sec.1.2.1].) In this case, can we decide if all the annihilators of $\mathbf{f}$ have constant term zero? We give two equivalent characterizations of AnnAtZero, one geometric and the other algebraic, using which we devise a PSPACE algorithm to solve it in all cases (Theorem 2).
Interestingly, there is an algebraic-complexity application of the above algorithm. We give a PSPACE-explicit construction of a hitting-set for the class $\overline{\rm VP}_{\mathbb{F}_q}$ (Theorem 3). $\overline{\rm VP}_{\mathbb{F}_q}$ consists of $n$-variate degree $d = n^{O(1)}$ polynomials, over the field $\mathbb{F}_q$, that can be 'infinitesimally approximated' by size $s = n^{O(1)}$ algebraic circuits. This problem is interesting as natural questions like the explicit construction of the normalization map (in Noether's Normalization Lemma NNL) reduce to the construction of a hitting-set for $\overline{\rm VP}$ [Mul17]; the latter was previously known to be only in EXPSPACE [Mul17, Mul12]. This was recently improved greatly, over the field $\mathbb{C}$, by [FS17]. Their proof technique uses real analysis and does not apply to finite fields. We need to develop purely algebraic concepts to solve the finite field case (namely through AnnAtZero), which then apply to any field.
To further motivate the concept of algebraic dependence, we list a few recent problems in computer science. The first problem is about constructing an explicit randomness extractor for sources which are polynomial maps over finite fields. Using the Jacobian criterion, [DGW09, Dvi09] solved the problem for fields of large characteristic. The second application is in the famous polynomial identity testing (PIT) problem. To efficiently design hitting-sets, for some interesting models, [BMS13, ASSS12, KS16] constructed a family of trdeg-preserving maps. For more background and applications of algebraic dependence testing, see [PSS16]. The annihilator has been a key concept in proving the connection between hitting-sets and lower bounds [HS80], and in bootstrapping 'weak' hitting-sets [AGS17].

Our results
In this paper, we give Arthur-Merlin protocols and algorithms, with proofs using basic tools from algebraic geometry. The first theorem we prove is about AD($\mathbb{F}_q$).

Theorem 1. Algebraic dependence testing of circuits in $\mathbb{F}_q[x_1, \ldots, x_n]$ is in AM $\cap$ coAM.

This result vastly improves the current best upper bound known for AD($\mathbb{F}_q$): from being 'outside' the polynomial hierarchy (namely NP$^{\#\rm P}$ [MSS14]) to 'lower' than the second level of the polynomial hierarchy (namely AM $\cap$ coAM). This rules out the possibility of AD($\mathbb{F}_q$) being NP-hard (unless the polynomial hierarchy collapses to the second level [AB09]). Recall that, for zero or large characteristic $\mathbb{F}$, AD($\mathbb{F}$) is in coRP (Section 2). We conjecture such a result for AD($\mathbb{F}_q$) too.
Our second result is about the problem AnnAtZero (i.e. testing whether all the annihilators of given $\mathbf{f}$ have constant term zero). A priori it is unclear why it should have complexity better than EXPSPACE (note: ideal membership is EXPSPACE-complete [MM82]). Firstly, we relate it to a (new) version of polynomial system satisfiability over the algebraic closure $\overline{\mathbb{F}}$, called approximate polynomials satisfiability (APS): does there exist a point $\beta \in \overline{\mathbb{F}}(\varepsilon)^n$ such that $f_i(\beta) \in \varepsilon\overline{\mathbb{F}}[\varepsilon]$ for all $i$? It is easy to show that the function field $\overline{\mathbb{F}}(\varepsilon)$ here can be equivalently replaced by the Laurent polynomials $\overline{\mathbb{F}}[\varepsilon, \varepsilon^{-1}]$, or by the field $\overline{\mathbb{F}}((\varepsilon))$ of formal Laurent series (use mod $\varepsilon\overline{\mathbb{F}}[\varepsilon]$). A reason why these objects appear in algebraic complexity can be found in [Bür04, Sec.5.2] & [LL89, Sec.5]. They help algebrize the notion of 'infinitesimal approximation' (in real analysis think of $\varepsilon \to 0$ & $1/\varepsilon \to \infty$). A notable computational issue involved is that the degree bound on $\varepsilon$ required for $\beta$ is exponential in the input size [LL89, Prop.3]; this may again be a "justification" for APS requiring that much space.
Classically, the exact version of APS has been extremely well-studied: does there exist $\beta \in \overline{\mathbb{F}}^n$ such that for all $i$, $f_i(\beta) = 0$? This is what Hilbert's Nullstellensatz (HN) characterizes, and it yields an impressive PSPACE algorithm [Koi96, Kol88]. Note that if the system $\mathbf{f}$ has an exact solution, then it is trivially in APS. But the converse is not true. For example, $\{x,\ xy-1\}$ is in APS, but there is no exact solution in $\overline{\mathbb{F}}$. To see the former, assign $x = \varepsilon$ and $y = 1/\varepsilon$. Also, the instance $\{x,\ x+1\}$ is neither in APS nor has an exact solution. Finally, note that if we restrict $\beta$ to come from $\overline{\mathbb{F}}[\varepsilon]^n$ then APS becomes equivalent to exact satisfiability and HN applies. This can be seen by going modulo $\varepsilon\overline{\mathbb{F}}[\varepsilon]$, as the quotient ring $\overline{\mathbb{F}}[\varepsilon]/\varepsilon\overline{\mathbb{F}}[\varepsilon]$ is $\overline{\mathbb{F}}$ itself. Coming back to AnnAtZero, we show that it is equivalent both to a geometric question and to deciding APS. This gives us, with more work, the following surprising consequence.
Theorem 2. APS is NP-hard and is in PSPACE.
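The toy instance $\{x,\ xy-1\}$ from above can be checked numerically. The following Python sketch (illustrative only) substitutes $x = \varepsilon$, $y = 1/\varepsilon$ for shrinking rational values of $\varepsilon$, showing both polynomials tend to 0, while $\{x,\ x+1\}$ cannot be satisfied this way:

```python
from fractions import Fraction

# Substituting x = eps, y = 1/eps into the system {x, x*y - 1}:
# f1 evaluates to eps (an element of eps*A[eps]) and f2 to exactly 0,
# so both tend to 0 as eps -> 0, i.e. the system is in APS.
def evaluate(eps):
    x, y = eps, 1 / eps
    return (x, x * y - 1)

vals = [evaluate(Fraction(1, 10**k)) for k in range(1, 6)]

# By contrast {x, x + 1} is not in APS: if f1 = x is to lie in eps*A[eps]
# then f2 = x + 1 is 1 modulo eps*A[eps], not 0.
```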
We apply this to design hitting-sets and to solve NNL (refer to [Mul17] for the background).
Theorem 3. There is a PSPACE algorithm that (given input $n, s, r$ in unary and a suitably large $\mathbb{F}_q$) outputs a set of points from $\mathbb{F}_q^n$, of size poly($nsr, \log q$), that hits all $n$-variate degree-$r$ polynomials over $\mathbb{F}_q$ that can be infinitesimally approximated by size-$s$ circuits.

More applications?
The exact polynomials satisfiability question HN (over $\overline{\mathbb{F}}$) is highly expressive and, naturally, most computer science problems get expressed that way. We claim that, in a similar spirit, the APS question expresses those computer science problems that involve 'infinitesimal approximation'. One prominent example is the concept of border rank of tensors (used in matrix multiplication algorithms and GCT, see [BCS13, Lan12, LG14]). Border rank computation of a given tensor (over $\overline{\mathbb{F}}$) can easily be reduced to an APS instance and, hence, can now be solved in PSPACE; this matches the complexity of tensor rank itself [SŠ17]. From the point of view of Gröbner basis theory, APS is a problem that seems a priori much harder than HN. Now that both of them have a PSPACE algorithm, one may wonder whether they can be brought all the way down to NP or AM? (In fact, HN$_{\mathbb{C}}$ is known to be in AM, conditionally under GRH [Koi96].) Our methods in the proof of Theorem 2 imply an interesting "degree bound" related to the (prime) ideal $I$ of annihilators of polynomials $\mathbf{f}$. Namely, $I = \sqrt{I_{\le d}}$, where $I_{\le d}$ refers to the subideal generated by the degree-$\le d$ polynomials of $I$, $d$ is the Perron-like bound $(\max_{i\in[m]} \deg(f_i))^k$, and $k := \mathrm{trdeg}(\mathbf{f})$. This is equivalent to the geometric fact, which we prove, that the varieties defined by the two ideals $I$ and $I_{\le d}$ are equal (Theorem 17). This again is an exponential improvement over what one expects to get from general Gröbner basis methods; because the generators of $I$ may well have doubly-exponential degree.
The hitting-set result (Theorem 3) can be applied to compute, in PSPACE, the explicit system of parameters (esop) of the invariant ring of the variety $\Delta[\det, s]$, over $\mathbb{F}_q$, with a given group action [Mul17, Thm.4.9]. Also, we can now construct, in PSPACE, polynomials in $\mathbb{F}_q[x_1, \ldots, x_n]$ that cannot even be approximated by 'small' algebraic circuits. Such results were previously known only for characteristic zero fields, see [FS17, Thms.1.1-1.4]. Bringing this complexity down to P is the longstanding problem of blackbox PIT (and lower bounds), see [Sax09, SY10, Sax13]. Mulmuley [Mul12] pointed out that small hitting-sets for $\overline{\rm VP}$ can be designed in EXPSPACE, a far worse complexity than that for VP. He called this the GCT Chasm. We bridge it somewhat, as the proof of Theorem 3 shows that small hitting-sets for $\overline{\rm VP}_{\mathbb{F}}$ can be designed in PSPACE (like those for VP) for any field $\mathbb{F}$.

Proof ideas
Proof idea of Theorem 1. Suppose we are given algebraic circuits $\mathbf{f} := \{f_1, \ldots, f_m\}$ computing in $\mathbb{F}_q[x_1, \ldots, x_n]$. For the AM and coAM protocols, we consider the following system of equations over a 'small' extension $\mathbb{F}_{q'}$: for $b = (b_1, \ldots, b_m) \in \mathbb{F}_{q'}^m$, define the system $f_i(x) = b_i$, $i \in [m]$. We denote the number of solutions of this system in $\mathbb{F}_{q'}^n$ by $N_b$. Let $f : \mathbb{F}_{q'}^n \to \mathbb{F}_{q'}^m$ be the polynomial map $a \mapsto (f_1(a), \ldots, f_m(a))$.
AM gap. [Theorem 9] We establish bounds for the number $N_{f(a)}$, where $a$ is a random point in $\mathbb{F}_{q'}^n$. If $f_1, \ldots, f_m$ are independent, we show that $N_{f(a)}$ is relatively small. Whereas, if the polynomials are algebraically dependent, then $N_{f(a)}$ is much larger.
Assume $\mathbf{f}$ are algebraically independent. Wlog (see the full version of [PSS16, Sec.2]) we can assume that $m = n$ and, for all $i \in [n]$, $\{x_i, f_1, \ldots, f_n\}$ are algebraically dependent. The first step is to show that the zeroset defined by the system of equations, for random $f(a)$, has dimension $\le 0$. This is proved using the Perron degree bound on the annihilator of $\{x_i, f_1, \ldots, f_n\}$. Next, one can apply an affine version of Bézout's theorem to upper bound $N_{f(a)}$. On the other hand, suppose $\mathbf{f}$ are algebraically dependent, say with annihilator $Q$. Let $\mathrm{Im}(f) := f(\mathbb{F}_{q'}^n)$ be the image of $f$. Since $Q$ vanishes on $\mathrm{Im}(f)$, we know that $\mathrm{Im}(f)$ is relatively small, whence we deduce that $N_{f(a)}$ is large for 'most' $a$'s.
coAM gap. [Theorem 12] We pick a random point $b = (b_1, \ldots, b_m) \in \mathbb{F}_{q'}^m$ and bound $N_b$, the number of solutions of the system defined above. In the dependent case, we show that $N_b = 0$ for 'most' $b$'s. But in the independent case, we show that $N_b \ge 1$ for 'many' (maybe not 'most'!) $b$'s. The ideas are based on those sketched above.
The two kinds of gaps shown above are based on the set $f^{-1}(f(x))$ resp. $\mathrm{Im}(f)$. Note that membership in either of these sets is testable in NP (the latter requires nondeterminism). Based on this and the gaps between the respective cardinalities, we can invoke Lemma 4 and devise the AM and coAM protocols for AD($\mathbb{F}_{q'}$), which also apply to AD($\mathbb{F}_q$).
Remark: One advantage in our problem is that we could sample a random point in the set $\mathrm{Im}(f)$. In contrast, it is not clear how to sample a random point in the zeroset $\mathrm{Zer}(\mathbf{f})$. Thus, we manage to side-step the NP-hardness associated with most zeroset properties. E.g. computing the dimension of $\mathrm{Zer}(\mathbf{f})$ is NP-hard.
Proof idea of Theorem 2. Let algebraic circuits $\mathbf{f} := \{f_1, \ldots, f_m\}$ in $\mathbb{F}[x_1, \ldots, x_n]$ be given over a field $\mathbb{F}$. We want to determine if the constant term of every annihilator of $\mathbf{f}$ is zero. Redefine the polynomial map $f : \overline{\mathbb{F}}^n \to \overline{\mathbb{F}}^m$, $x \mapsto (f_1(x), \ldots, f_m(x))$. For a subset $S$ of an affine (resp. projective) space, write $\overline{S}$ for its Zariski closure in that space, i.e. the smallest closed subset that contains $S$; here closed means equal to the zeroset $\mathrm{Zer}(I)$ of some polynomial ideal $I$.

APS vs AnnAtZero. [Theorem 14] Now, we interpret the problem AnnAtZero in a geometric way through Lemma 13: The constant term of every annihilator of $\mathbf{f}$ is zero iff the origin point $0 \in \overline{\mathrm{Im}(f)}$. This has a simple proof using the ideal-variety correspondence [Har92]. Note that the stronger condition $0 \in \mathrm{Im}(f)$ is equivalent to the existence of a common solution to the equations $f_i(x_1, \ldots, x_n) = 0$, $i = 1, \ldots, m$. The latter problem (call it HN for Hilbert's Nullstellensatz) is known to be in AM if $\mathbb{F} = \mathbb{Q}$ and GRH is assumed [Koi96]. However, $\mathrm{Im}(f)$ is not necessarily Zariski closed; equivalently, it may be strictly smaller than $\overline{\mathrm{Im}(f)}$. So, we need new ideas to test $0 \in \overline{\mathrm{Im}(f)}$.
Next, we observe that although $0 \in \overline{\mathrm{Im}(f)}$ is not equivalent to the existence of a solution $x \in \overline{\mathbb{F}}^n$ to $f(x) = 0$, it is equivalent to the existence of an "approximate solution" $x \in \overline{\mathbb{F}}(\varepsilon)^n$, which is an $n$-tuple of rational functions in a formal variable $\varepsilon$. The proof of this uses a degree bound on $\varepsilon$ due to [LL89]. We called this problem APS. As the AnnAtZero problem is already known to be NP-hard [Kay09], APS is also NP-hard.

Upper bounding APS. We now know that solving APS for $\mathbf{f}$ is equivalent to solving AnnAtZero for $\mathbf{f}$. AnnAtZero was previously known to be in PSPACE in the special case when the trdeg $k$ of $\mathbb{F}(\mathbf{f})/\mathbb{F}$ equals $m$ or $m-1$, but the general case remained open (the best being EXPSPACE).
In this work we prove that AnnAtZero is in PSPACE even when $k < m-1$. Our simple idea is to reduce the input to a smaller, $m = k+1$, instance by choosing new polynomials $g_1, \ldots, g_{k+1}$ that are random linear combinations of the $f_i$'s. We show that, with high probability, replacing $\{f_1, \ldots, f_m\}$ by $\{g_1, \ldots, g_{k+1}\}$ preserves YES/NO instances as well as the trdeg. This gives a randomized poly-time reduction from the case $k < m-1$ to $k = m-1$ (Theorem 17). The latter has a standard PSPACE algorithm.
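The trdeg-preservation half of this reduction can be illustrated via Jacobian ranks over a large prime field (where the Jacobian criterion is valid). The example system $\{x, y, x+y, xy\}$ of trdeg $k = 2$ and the helper below are our own illustrative choices, a sketch and not the paper's construction:

```python
import random

def rank_mod_p(M, p):
    # Rank over F_p by Gaussian elimination.
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0])):
        piv = next((r for r in range(rank, len(M)) if M[r][c] % p), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][c], -1, p)
        M[rank] = [v * inv % p for v in M[rank]]
        for r in range(len(M)):
            if r != rank and M[r][c] % p:
                f = M[r][c]
                M[r] = [(M[r][j] - f * M[rank][j]) % p
                        for j in range(len(M[0]))]
        rank += 1
    return rank

p = (1 << 61) - 1
x, y = random.randrange(p), random.randrange(p)
# m = 4 polynomials f = {x, y, x+y, x*y} of trdeg k = 2:
# Jacobian rows (gradients) evaluated at a random point (x, y).
Jf = [[1, 0], [0, 1], [1, 1], [y, x]]
# Replace f by k+1 = 3 random linear combinations g_i = sum_j c_ij f_j;
# the Jacobian of g is C * Jf, and w.h.p. its rank is still k = 2.
C = [[random.randrange(p) for _ in range(4)] for _ in range(3)]
Jg = [[sum(C[i][j] * Jf[j][l] for j in range(4)) % p for l in range(2)]
      for i in range(3)]
k_f, k_g = rank_mod_p(Jf, p), rank_mod_p(Jg, p)
```

The harder part of Theorem 17, that YES/NO instances of AnnAtZero are also preserved, is the geometric content sketched next.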
For notational convenience view $\overline{\mathbb{F}}$ as the affine line $\mathbb{A}$. Define $V := \overline{\mathrm{Im}(f)} \subseteq \mathbb{A}^m$. Proving that the above reduction (of $m$) does preserve YES/NO instances amounts to proving the following geometric statement: If $V$ does not contain the origin $O \in \mathbb{A}^m$, then with high probability, the variety $V' := \overline{\pi(V)}$ does not contain the origin $O' \in \mathbb{A}^{k+1}$ either, where $\pi : \mathbb{A}^m \to \mathbb{A}^{k+1}$ is a random linear map.
As $\pi$ is picked at random, the kernel $W$ of $\pi$ is a random linear subspace of $\mathbb{A}^m$. We have $O' \notin \pi(V)$ whenever $V \cap W = \emptyset$, but this is not sufficient for proving $O' \notin \overline{\pi(V)}$, since $V$ may "get arbitrarily close to $W$" in $\mathbb{A}^m$ and meet $W$ "at infinity". Inspired by this observation, we consider projective geometry instead of affine geometry, and prove that $O' \notin V'$ holds as long as the projective closure of $V$ and that of $W$ are disjoint. The proof uses the construction of a projective subvariety, the join, to characterize $\pi^{-1}(V')$, and eventually rules out $O' \in V'$. Moreover, we show that this disjointness holds with high probability if $O \notin V$: by (repeatedly) using the fact that a generic (= random) hyperplane section reduces the dimension of a variety by one.
Proof idea of Theorem 3. Define $\mathbb{A} := \overline{\mathbb{F}_q}$ and assume wlog $q \ge \Omega(sr^2)$ [AL86]. [HS80, Thm.4.4] showed that a hitting-set, of size $h := O(s^2 n^2 \log q)$ in $\mathbb{F}_q^n$, exists for the class of degree-$r$ polynomials, in $\mathbb{A}[x_1, \ldots, x_n]$, that can be infinitesimally approximated by size-$s$ algebraic circuits. So, we can search over all possible subsets of size $h$ from $\mathbb{F}_q^n$, and 'most' of them are hitting-sets. How do we certify that a candidate set $H$ is a hitting-set? The idea is to use universal circuits. A universal circuit has $n$ essential variables $x = \{x_1, \ldots, x_n\}$ and $s' := O(sr^4)$ auxiliary variables $y = \{y_1, \ldots, y_{s'}\}$. We can fix the auxiliary variables, from $\mathbb{A}(\varepsilon)$, in such a way that it can output any homogeneous circuit of size $s$ approximating a degree-$r$ polynomial in $\overline{\rm VP}_{\mathbb{A}}$. Given a universal circuit $\Psi$, certification of a hitting-set $H$ is based on the following observation, which follows from the definitions: $H$ is a hitting-set iff for every $\alpha \in \mathbb{A}(\varepsilon)^{s'}$ with $\Psi(\alpha, x) \notin \varepsilon\mathbb{A}[\varepsilon][x]$, there is a $\beta \in H$ with $\Psi(\alpha, \beta) \notin \varepsilon\mathbb{A}[\varepsilon]$. Note that this hitting-set certification is more challenging than the one against polynomials in VP; because the degree bounds for $\varepsilon$ are exponentially high and, moreover, we do not know how to frame the first 'non-containment' condition as an APS instance. To translate it to an APS instance, our key idea is the following.
Pick $q \ge \Omega(s'r^2)$ so that a hitting-set exists, in $\mathbb{F}_q^n$, that works against polynomials approximated by the specializations of $\Psi$. Suppose $\Psi(\alpha, x)$ is not in $\varepsilon\mathbb{A}[\varepsilon][x]$, for some $\alpha \in \mathbb{A}(\varepsilon)^{s'}$. This means that we can write it as $\sum_{-m \le i \le m'} \varepsilon^i g_i(x)$ with $g_{-m} \ne 0$ and $m \ge 0$. Clearly, $\varepsilon^m \cdot \Psi(\alpha, x)$ infinitesimally approximates the nonzero polynomial $g_{-m} \in \mathbb{A}[x]$. By the conditions on $\Psi$, we know that $g_{-m}$ is a homogeneous degree-$r$ polynomial (of approximative complexity $s'$). Thus, by [Sch80], there exists a $\beta \in \mathbb{F}_q^n$ such that $g_{-m}(\beta) =: a$ is a nonzero element of $\mathbb{A}$. We can normalize by this and consider $a^{-1}\varepsilon^m \cdot \Psi(y, x)$, which evaluates into $1 + \varepsilon\mathbb{A}[\varepsilon]$ at $(\alpha, \beta)$. Since this normalization factor only affects the auxiliary variables $y$, we get another equivalent criterion for a candidate set $H$, in terms of evaluations lying in $1 + \varepsilon\mathbb{A}[\varepsilon]$. We have reached closer to APS, but how do we implement the quantifier $\exists\beta \in \mathbb{F}_q^n$ (it ranges over an exponential space)? The idea is to rewrite it, instead, using the $(r+1)$-th roots of unity $Z_{r+1} \subset \mathbb{A}$. This gives us a criterion that is an instance of APS with $n + h + 1$ input polynomials (Theorem 21). By Theorem 2 it can be decided in PSPACE, finishing the proof. Moreover, this PSPACE algorithm idea is independent of the field characteristic. (E.g. it can be seen as an alternative to [FS17] over the complex field.)
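The [Sch80] ingredient used above, that a nonzero degree-$r$ polynomial vanishes on at most an $r/q$ fraction of $\mathbb{F}_q^n$, can be verified by brute force on a toy example (illustrative Python; the sample polynomial is our own choice):

```python
# Schwartz-Zippel over F_q: a nonzero n-variate polynomial of total degree r
# vanishes on at most r * q^(n-1) points of F_q^n. Brute-force check for the
# sample polynomial g = x^2*y + y + 1 (degree 3) over F_11 with n = 2.
q, r = 11, 3
zeros = sum(1 for x in range(q) for y in range(q)
            if (x * x * y + y + 1) % q == 0)
bound = r * q  # r * q^(n-1) with n = 2
```

Here $g$ has exactly $q$ zeros (for each $x$, the unique $y = -(x^2+1)^{-1}$, which exists since $-1$ is a non-residue mod 11), comfortably within the bound $rq = 33$; so a random point hits $g$ with probability $1 - 1/q$.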

Preliminaries
Jacobian. Although this work will not need it, we define the classical Jacobian: for polynomials $\mathbf{f} = \{f_1, \ldots, f_m\} \subset \mathbb{F}[x_1, \ldots, x_n]$, the Jacobian matrix is $J_x(\mathbf{f}) := (\partial f_i/\partial x_j)_{m \times n}$. The Jacobian criterion [Jac41, BMS13] states: For degree $\le d$ and trdeg $\le r$ polynomials $\mathbf{f}$, if $\mathrm{char}(\mathbb{F}) = 0$ or $\mathrm{char}(\mathbb{F}) > d^r$, then $\mathrm{trdeg}(\mathbf{f}) = \mathrm{rank}_{\mathbb{F}(x)} J_x(\mathbf{f})$. This yields a randomized poly-time algorithm [Sch80]. For other fields, the Jacobian criterion fails due to inseparability and AD($\mathbb{F}$) is open.
AM protocol. The Arthur-Merlin class AM is a randomized version of the class NP (see [AB09]). Arthur-Merlin protocols, introduced by Babai [Bab85], can be considered as a special type of interactive proof system in which the randomized poly-time verifier (Arthur) and the all-powerful prover (Merlin) have only constantly many rounds of exchange. AM contains interesting problems like determining if two graphs are non-isomorphic. AM $\cap$ coAM is the class of decision problems for which both YES and NO answers can be verified by an AM protocol. It can be thought of as the randomized version of NP $\cap$ coNP. See [KS06] for a few natural algebraic problems in AM $\cap$ coAM. If such a problem is NP-hard (even under random reductions) then the polynomial hierarchy collapses to the second level, i.e. PH $= \Sigma_2$.
In this work the AM protocol will only be used to distinguish whether a set $S$ is 'small' or 'large'. Formally, we refer to the Goldwasser-Sipser Set Lowerbound method:

Lemma 4. [AB09, Chap.9] Let $m \in \mathbb{N}$ be given in binary. Suppose $S$ is a set whose membership can be tested in nondeterministic polynomial time and its size is promised to be either $\le m$ or $\ge 2m$. Then, the problem of deciding whether $|S| \ge 2m$ is in AM.
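The method rests on random hashing: Arthur hashes the universe into a range calibrated to $m$ and asks Merlin to exhibit an element of $S$ hitting a random target; the success probability separates the two promised sizes. A toy Monte-Carlo simulation (illustrative only; the affine hash family and all parameters are our own choices, not from [AB09]):

```python
import random

# Toy simulation of the Goldwasser-Sipser set-lowerbound idea: Arthur picks
# a random (roughly pairwise-independent) hash h: U -> [R] and a random
# target z; Merlin can answer iff some element of S hashes to z. The
# empirical success probability separates |S| <= m from |S| >= 2m.
def hit_rate(S, R, trials=20000):
    P = 1_000_003  # a prime larger than the universe, for affine hashing
    hits = 0
    for _ in range(trials):
        a, b = random.randrange(1, P), random.randrange(P)
        z = random.randrange(R)
        if any(((a * s + b) % P) % R == z for s in S):
            hits += 1
    return hits / trials

m = 8
R = 4 * m                       # hash range calibrated to the promise gap
small = set(range(m))           # |S| = m
large = set(range(2 * m))       # |S| = 2m
rate_small, rate_large = hit_rate(small, R), hit_rate(large, R)
```

With these parameters the hit rate is roughly $|S|/R$ (minus collisions), so the two promised cases are distinguishable by repetition.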
Geometry. Due to limited space we have moved the geometry preliminaries to Appendix A. One can also refer to a standard text, e.g. [Har92, Har13]. Basically, we need terms about affine (resp. projective) zerosets and the underlying Zariski topology. The latter gives a way to 'impose' geometry even in very discrete situations, e.g. finite fields in this work.

Algebraic dependence testing: Proof of Theorem 1
Given $f_1, \ldots, f_m \in \mathbb{F}_q[x_1, \ldots, x_n]$, we want to decide if they are algebraically dependent. For this problem AD($\mathbb{F}_q$) we may assume, with some preprocessing, that $m = n$. For, $m > n$ means that it is a YES instance. If $m < n$ then we can apply a 'random' linear map on the variables to reduce their number to $m$, preserving YES/NO instances. Also, the trdeg does not change when we move to the algebraic closure $\overline{\mathbb{F}_q}$. The details can be found in [PSS16, Lem.2.7-2.9]. So, we assume the input instance to be $\mathbf{f} := \{f_1, \ldots, f_n\}$ with nonconstant polynomials.
In the following, let $D := \prod_{i \in [n]} \deg(f_i)$ and $D' := \max_{i \in [n]} \deg(f_i)$. Let $d \in \mathbb{N}^+$ and $q' = q^d$; the value of $d$ will be determined later. Let $f : \mathbb{F}_{q'}^n \to \mathbb{F}_{q'}^n$ be the polynomial map $a \mapsto (f_1(a), \ldots, f_n(a))$, and let $Q \in \mathbb{F}_q[y_1, \ldots, y_n]$ be a nonzero annihilator, of minimal degree, of $f_1, \ldots, f_n$. If it exists then $\deg(Q) \le D$ by Perron's bound.

AM protocol
First, we study the independent case.
Lemma 5 (Dim=0 preimage). Suppose $\mathbf{f}$ are independent. Then the preimage $f^{-1}(f(a))$, over $\overline{\mathbb{F}_{q'}}$, is finite for all but at most an $(nDD'/q')$-fraction of $a \in \mathbb{F}_{q'}^n$. To see this, let $A_i$ be a minimal annihilator of $\{x_i, f_1, \ldots, f_n\}$, and note that for any $b = (b_1, \ldots, b_n) \in \mathbb{A}^n$ satisfying the equations $f_j(b) = f_j(a)$, $j \in [n]$, the coordinate $b_i$ is a root of the univariate polynomial $A_i(x_i, f_1(a), \ldots, f_n(a))$, which is nonzero for 'most' $a$; this leaves only finitely many possibilities for each coordinate of $b$. The claim now follows from the union bound.
We need the following affine version of Bézout's Theorem; its proof can be found in [Sch95, Thm.3.1].

Lemma 6 (Bézout). If polynomials $g_1, \ldots, g_n \in \overline{\mathbb{F}}[x_1, \ldots, x_n]$ have finitely many common zeros in $\overline{\mathbb{F}}^n$, then the number of such zeros is at most $\prod_{i \in [n]} \deg(g_i)$.
Combining Lemma 5 with Bézout's Theorem, we obtain:

Lemma 7 (Small preimage). Suppose $\mathbf{f}$ are independent. Then $N_{f(a)} \le D$ for all but at most an $(nDD'/q')$-fraction of $a \in \mathbb{F}_{q'}^n$.
Next, we study the dependent case (with an annihilator Q).
Lemma 8 (Large preimage). Suppose $\mathbf{f}$ are dependent. Then for $k > 0$, we have $N_{f(a)} > k$ for all but at most a $(kD/q')$-fraction of $a \in \mathbb{F}_{q'}^n$.
Proof. Let $\mathrm{Im}(f) := f(\mathbb{F}_{q'}^n)$ be the image of the map. Note that $Q$ vanishes on all the points in $\mathrm{Im}(f)$. So, $|\mathrm{Im}(f)| \le Dq'^{n-1}$ by [Sch80].
Let $B := \{b \in \mathrm{Im}(f) : N_b \le k\}$ be the "bad" images. We can estimate the bad domain points as $\#\{a \in \mathbb{F}_{q'}^n : N_{f(a)} \le k\} = \sum_{b \in B} N_b \le k|B| \le k|\mathrm{Im}(f)| \le kDq'^{n-1}$.

Theorem 9 (AM). Testing algebraic dependence of $\mathbf{f}$ is in AM.
Proof. Fix $q' = q^d > 4nDD' + 4kD$ and $k := 2D$. Note that $d$ will be polynomial in the input size. For an $a \in \mathbb{F}_{q'}^n$, consider the set $f^{-1}(f(a)) \cap \mathbb{F}_{q'}^n$: by Lemmas 7 and 8, for 'most' $a$ its size $N_{f(a)}$ is more than $2D$ in the dependent case while $\le D$ in the independent case. Note that an upper bound on $\prod_{i \in [n]} \deg(f_i)$ can be deduced from the size of the input circuits for the $f_i$'s; thus, we know $D$. Moreover, containment in $f^{-1}(f(a))$ can be tested in P. Thus, by Lemma 4, AD($\mathbb{F}_q$) is in AM.
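The preimage gap is visible by brute force on a toy example over a small field (illustrative Python; the dependent and independent maps below are our own choices, with $n = m = 2$):

```python
import random

q = 101  # work over F_q, n = m = 2

# Dependent pair: f = (x+y, (x+y)^2), with annihilator y1^2 - y2.
# Independent pair: g = (x, y).
f = lambda x, y: ((x + y) % q, (x + y) ** 2 % q)
g = lambda x, y: (x % q, y % q)

def preimage_size(phi, value):
    # Brute-force count of N_value for the map phi over F_q^2.
    return sum(1 for x in range(q) for y in range(q) if phi(x, y) == value)

a = (random.randrange(q), random.randrange(q))
N_dep = preimage_size(f, f(*a))   # = q: all (x', y') with x'+y' = a1+a2
N_ind = preimage_size(g, g(*a))   # = 1: the point a itself
```

Here the dependent map has preimages of size $q$ while the independent map has preimages of size 1, the kind of cardinality gap the protocol certifies via Lemma 4.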

coAM protocol
We again study the independent case, wrt a different point in the range of $f$.

Lemma 10 (Large image). Suppose $\mathbf{f}$ are independent. Then $N_b > 0$ for at least a $(D^{-1} - nD'q'^{-1})$-fraction of $b \in \mathbb{F}_{q'}^n$.

Proof. Let $S := \{a \in \mathbb{F}_{q'}^n : N_{f(a)} \le D\}$. Then $|S| \ge (1 - nDD'q'^{-1}) \cdot q'^n$ by Lemma 7. As every $b \in f(S)$ has at most $D$ preimages in $S$ under $f$, we have $|f(S)| \ge |S|/D \ge (D^{-1} - nD'q'^{-1}) \cdot q'^n$. This proves the lemma, since $N_b > 0$ for all $b \in f(S)$.
Next, we study the dependent case.
Lemma 11 (Small image). Suppose $\mathbf{f}$ are dependent. Then $N_b = 0$ for all but at most a $(D/q')$-fraction of $b \in \mathbb{F}_{q'}^n$.

Proof. By definition, $\{b : N_b > 0\} = \mathrm{Im}(f)$. It was shown in the proof of Lemma 8 that $|\mathrm{Im}(f)| \le Dq'^{n-1}$. The lemma follows.
Theorem 12 (coAM). Testing algebraic dependence of $\mathbf{f}$ is in coAM.

Proof. Fix $q' = q^d > D(2D + nD')$. Note that $d$ will be polynomial in the input size. Consider the set $S := \{b \in \mathbb{F}_{q'}^n : N_b > 0\}$. By Lemma 10 (resp. Lemma 11), $|S| \ge (D^{-1} - nD'q'^{-1})q'^n > 2Dq'^{n-1}$ (resp. $|S| \le Dq'^{n-1}$) when $\mathbf{f}$ are independent (resp. dependent). Note that an upper bound on $\prod_{i \in [n]} \deg(f_i)$ can be deduced from the size of the input circuits for the $f_i$'s; thus, we know $Dq'^{n-1}$. Moreover, containment in $S$ can be tested in NP. Thus, by Lemma 4, AD($\mathbb{F}_q$) is in coAM.
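The image-size gap behind Lemmas 10 and 11 can likewise be computed exhaustively for the same toy maps (illustrative Python, our own example with $n = m = 2$):

```python
q = 101  # F_q, n = m = 2

f = lambda x, y: ((x + y) % q, (x + y) ** 2 % q)   # dependent pair
g = lambda x, y: (x % q, y % q)                     # independent pair

im_f = {f(x, y) for x in range(q) for y in range(q)}
im_g = {g(x, y) for x in range(q) for y in range(q)}

# For the dependent pair the image is tiny (q points out of q^2), so a
# random b has N_b = 0 with probability 1 - 1/q; for the independent pair
# every b has N_b >= 1.
frac_unreachable_f = 1 - len(im_f) / q**2
frac_unreachable_g = 1 - len(im_g) / q**2
```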
Proof of Theorem 1. The statement immediately follows from Theorems 9 & 12.
Approximate polynomials satisfiability: Proof of Theorem 2

Theorem 2 is proved in two parts. First, we show that APS is equivalent to the AnnAtZero problem, which means that it is NP-hard [Kay09]. Next, we utilize the beautiful underlying geometry to devise a PSPACE algorithm.

APS is equivalent to AnnAtZero
Let $\mathbb{A}$ be the algebraic closure of $\mathbb{F}$. Note that for the given polynomials there is an annihilator over $\mathbb{F}$ with nonzero constant term iff there is an annihilator over $\mathbb{A}$ with nonzero constant term. This is because if $Q$ is an annihilator over $\mathbb{A}$ with nonzero constant term, wlog 1, then (by basic linear algebra) the linear system in terms of the (unknown) coefficients of $Q$ also has a solution in $\mathbb{F}$. Thus, there is an annihilator over $\mathbb{F}$ with constant term 1. This proves that it suffices to solve AnnAtZero over the algebraically closed field $\mathbb{A}$. This provides us with a better geometry.
Write $f : \mathbb{A}^n \to \mathbb{A}^m$ for the polynomial map sending a point $x = (x_1, \ldots, x_n) \in \mathbb{A}^n$ to $(f_1(x), \ldots, f_m(x)) \in \mathbb{A}^m$. For a subset $S$ of an affine or projective space, write $\overline{S}$ for its Zariski closure in that space. We will use $O$ to denote the origin $0$ of an affine space.
The following lemma reinterprets APS in a geometric way.
Lemma 13 ($O$ in the closure). The constant term of every annihilator for $\mathbf{f}$ is zero iff $O \in \overline{\mathrm{Im}(f)}$.
Proof. Note that a polynomial $A \in \mathbb{A}[y_1, \ldots, y_m]$ is an annihilator of $\mathbf{f}$ iff it vanishes on $\mathrm{Im}(f)$, and its constant term is its value at $O$. So every annihilator has zero constant term iff every polynomial vanishing on $\mathrm{Im}(f)$ vanishes at $O$, i.e. iff $O \in \overline{\mathrm{Im}(f)}$, by the ideal-variety correspondence.

As an interesting corner case, the above lemma proves that whenever $\mathbf{f}$ are algebraically independent, we have $\mathbb{A}^m = \overline{\mathrm{Im}(f)}$. E.g. $f_1 = X_1$ and $f_2 = X_1X_2 - 1$. Even in the dependent cases, $\mathrm{Im}(f)$ is not necessarily closed in the Zariski topology.
Example 1. Let $n = 2$, $m = 3$. Consider $f_1 = f_2 = X_1$ and $f_3 = X_1X_2 - 1$. The annihilators are multiples of $(Y_1 - Y_2)$, which means by Lemma 13 that $O \in \overline{\mathrm{Im}(f)}$. But there is no solution to $f(x) = 0$ in $\mathbb{A}^2$.

Although $O \in \overline{\mathrm{Im}(f)}$ is not equivalent to the existence of an exact solution, it is equivalent to the existence of an "approximate solution" $x \in \mathbb{A}[\varepsilon, \varepsilon^{-1}]^n$, which is a tuple of Laurent polynomials in a formal variable $\varepsilon$. The formal statement is as follows. Wlog we assume $\mathbf{f}$ to be $m$ nonconstant polynomials.
Theorem 14. The constant term of every annihilator of $\mathbf{f}$ is zero iff there exists $x \in \mathbb{A}[\varepsilon, \varepsilon^{-1}]^n$ such that, for all $i \in [m]$, $f_i(x) \in \varepsilon\mathbb{A}[\varepsilon]$. Moreover, when such an $x$ exists, it may be chosen such that the $\varepsilon$-degrees of the $x_i$ lie in $[-D, D']$, where $D := \prod_{i \in [m]} \deg(f_i)$ and $D'$ is a similarly exponential bound.

The proof of Theorem 14 is almost the same as that in [LL89]. First, we recall a tool to reduce the domain from a variety to a curve, proven in [LL89].
Lemma 15. [LL89, Prop.1] Let $V \subseteq \mathbb{A}^n$, $W \subseteq \mathbb{A}^m$ be affine varieties, $\varphi : V \to W$ dominant, and $t \in W \setminus \varphi(V)$. Then there exists a curve $C \subseteq \mathbb{A}^n$ such that $t \in \overline{\varphi(C)}$ and $\deg(C) \le \deg(\Gamma_\varphi)$, where $\Gamma_\varphi$ denotes the graph of $\varphi$ embedded in $\mathbb{A}^n \times \mathbb{A}^m$.
Next, [LL89] essentially shows that, in the case of a curve, one can approximate the preimage of $f$ by using a single formal variable $\varepsilon$ and working in $\mathbb{A}(\varepsilon)$.
Finally, we can use the above two lemmas to prove the connection of APS with $O \in \overline{\mathrm{Im}(f)}$, and hence with AnnAtZero (by Lemma 13).
Proof of Theorem 14. First assume that an $x$, satisfying the conditions in Theorem 14, exists. Pick such an $x$. If $\mathbf{f}$ are algebraically independent then by Lemma 13 we have that $\mathbb{A}^m = \overline{\mathrm{Im}(f)}$ and we are done. So, assume that there is a nonzero annihilator $Q$ for $\mathbf{f}$. We have $Q(f_1(x), \ldots, f_m(x)) = 0$; reducing modulo $\varepsilon\mathbb{A}[\varepsilon]$, and using $f_i(x) \in \varepsilon\mathbb{A}[\varepsilon]$, this yields $Q(0, \ldots, 0) = 0$, which is the constant term of $Q$. So it equals zero. By Lemma 13, we have $O \in \overline{\mathrm{Im}(f)}$ and again we are done.
Conversely, assume $O \in \overline{\mathrm{Im}(f)}$; we will prove that $x$ exists. If $O \in \mathrm{Im}(f)$, then we can choose $x \in \mathbb{A}^n$ and we are done. So assume $O \in \overline{\mathrm{Im}(f)} \setminus \mathrm{Im}(f)$, and let the curve $C$ be as given by Lemma 15. Then, following [LL89], one obtains Laurent series $p_1, \ldots, p_n \in \mathbb{A}((\varepsilon))$ with $f_i(p_1, \ldots, p_n) \in \varepsilon\mathbb{A}[[\varepsilon]]$ for all $i \in [m]$. For $i \in [n]$, let $x_i$ be the Laurent polynomial obtained from $p_i$ by truncating the terms of degree greater than $D'$. When evaluating $f_1, \ldots, f_m$ at $(p_1, \ldots, p_n)$, such truncation does not affect the coefficient of $\varepsilon^k$ for $k \le 0$, by the choice of $D'$. So $f_i(x_1, \ldots, x_n) \in \varepsilon\mathbb{A}[\varepsilon]$ for all $i$, as required.

Remark: The lower bound $-D = -\prod_{i=1}^m \deg(f_i)$ for the least degree of $x_i$ in $\varepsilon$ can be achieved up to a factor of $1 + o(1)$, by a suitable system of $m = n+1$ polynomials that forces $x_1 \in \varepsilon\mathbb{A}[\varepsilon]$ and correspondingly large negative $\varepsilon$-degrees in the remaining $x_i$'s.

Putting APS in PSPACE
Owing to the exponential upper bound on the precision (= degree wrt $\varepsilon$) shown in Theorem 14, one expects to solve APS only in EXPSPACE. Surprisingly, in this section, we give a PSPACE algorithm. We do this by reducing the general AnnAtZero instance to a very special instance that is easy to solve.
Let $\mathbb{A}$ be the algebraic closure of the field $\mathbb{F}$. Let $f_1, \ldots, f_m \in \mathbb{F}[X_1, \ldots, X_n]$ be given. Denote by $k$ the trdeg of $\mathbb{F}(f_1, \ldots, f_m)/\mathbb{F}$. Computing $k$ can be done in PSPACE using linear algebra [Pło05, Csa76]. We assume $k < m-1$, since the cases $k = m-1$ and $k = m$ are easy to solve in PSPACE using linear algebra.
We reduce the number of polynomials from $m$ to $k+1$ as follows: Fix a finite subset $S \subseteq \mathbb{F}$, choose $c_{i,j} \in S$ at random for $i \in [k+1]$ and $j \in [m]$, and define $g_i := \sum_{j \in [m]} c_{i,j} f_j$. For this to work, we need a large enough $S$ and $\mathbb{F}$.
Our algorithm is immediate once we prove the following claim (Theorem 17): with probability at least $1 - \delta$ over the choice of the $c_{i,j}$, (1) the trdeg of $g_1, \ldots, g_{k+1}$ equals k, and (2) every annihilator of g has zero constant term iff every annihilator of f does.
First, we reformulate the two items of Theorem 17 in a geometric way; later we will analyze the error probability.
For $d \in \mathbb{N}$, denote by $A^d$ (resp. $P^d$) the d-dimensional affine space (resp. projective space) over $A := \overline{F}$. Let $f : A^n \to A^m$ (resp. $g : A^n \to A^{k+1}$) be the polynomial map sending x to $(f_1(x), \ldots, f_m(x))$ (resp. $(g_1(x), \ldots, g_{k+1}(x))$). Let O and O′ be the origins of $A^m$ and $A^{k+1}$ respectively. Define the affine varieties $V := \overline{\mathrm{Im}(f)} \subseteq A^m$ and $V' := \overline{\mathrm{Im}(g)} \subseteq A^{k+1}$. Let $\pi : A^m \to A^{k+1}$ be the linear map sending $(x_1, \ldots, x_m)$ to $(y_1, \ldots, y_{k+1})$, where $y_i = \sum_{j=1}^{m} c_{i,j} x_j$; thus $g = \pi \circ f$ and $V' = \overline{\pi(V)}$. We will give sufficient conditions for (1) and (2) in terms of incidence properties. Note that $\pi(V)$ itself need not be closed; see Example 2 in the appendix.
To overcome this problem, we consider projective geometry instead of affine geometry. Suppose $A^m$ has coordinates $X_1, \ldots, X_m$ and $P^m$ has homogeneous coordinates $X_0, \ldots, X_m$. Regard $A^m$ as a dense open subset of $P^m$, and let $H \subseteq P^m$ be the hyperplane at infinity, defined by $X_0 = 0$. For distinct points $P, Q \in P^m$, write $PQ$ for the projective line passing through them.
Lemma 18 (Sufficient conditions). Denote by $V_c$ and $W_c$ the projective closures in $P^m$ of V and of $W := \pi^{-1}(O')$ respectively, and set $W_H := W_c \cap H$. We have: (1) If $V_c \cap W_H = \emptyset$, then $\dim V' = \dim V = k$. (2) If, in addition, $O \notin V$ and $V_c \cap W_c = \emptyset$, then $O' \notin V'$.
Proof. (1): Since $V_c \cap W_H = \emptyset$, the linear projection of $P^m$ with center $W_H$ restricts to a morphism on $V_c$ with finite fibers, which preserves dimension [Har92, Thm. 11.12]. Hence $\dim V' = \dim V = k$.
(2): Assume to the contrary that $O' \in V'$. Denote by $J(V_c, W_H)$ the join of $V_c$ and $W_H$, which is defined to be the union of the projective lines $PQ$, where $P \in V_c$ and $Q \in W_H$. It is known that $J(V_c, W_H)$, as the join of two disjoint projective subvarieties, is again a projective subvariety of $P^m$ [Har92, Example 6.17]. Consider $P \in V_c$ and $Q \in W_H$. If $P \in H$, the line $PQ$ lies in H and does not meet $A^m$. Now suppose $P \notin H$, so that P is a point of V. Let $W_P$ denote the unique translate of W containing P, and let $\ell_P$ be an affine line contained in $W_P$ and passing through P (note that $W_P$ is the union of such lines). Then $\ell_P$ is a translate of an affine line $\ell \subseteq W$. As $\ell_P$ and $\ell$ are translates of each other, their projective closures intersect H at the same point Q. We claim that $J(V_c, W_H) \cap A^m = \bigcup_{P \in V} W_P = \pi^{-1}(\pi(V))$; the above shows one inclusion, and we prove the other direction by comparing dimensions: for two affine varieties of the same dimension, one contained in the other, the two must be equal. This proves the claim. Now, since $O' \in V' = \overline{\pi(V)}$ and $J(V_c, W_H)$ is closed, we get $O \in W = \pi^{-1}(O') \subseteq J(V_c, W_H)$, i.e., O lies on a line $PQ$ with $P \in V_c$ and $Q \in W_H$. As $O \notin V$, we have $P \neq O$. The linear space $W_c$ contains both O and Q, hence the whole line $PQ$; in particular $P \in V_c \cap W_c$, contradicting the hypothesis. Remark: The converse of Lemma 18 (Condition 2) is false; see Example 3 in the appendix.
Error probability. It remains to bound the probability of failure of the conditions $V_c \cap W_H = \emptyset$ and (in the case $O \notin V$) $V_c \cap W_c = \emptyset$ in Lemma 18. We need the following lemma.
Lemma 19 (Cut by hyperplanes). Let $V \subseteq P^m$ be a projective subvariety of dimension r and degree d. Let $r' \ge r + 1$. Choose $c_{i,j} \in S$ at random, for $i \in [r']$ and $0 \le j \le m$. Let $W \subseteq P^m$ be the projective subspace cut out by the equations $\sum_{j=0}^{m} c_{i,j} X_j = 0$, $i = 1, \ldots, r'$, where $X_0, \ldots, X_m$ are homogeneous coordinates of $P^m$. Then $V \cap W = \emptyset$ holds with probability at least $1 - (r+1)d/|S|$. Proof. For $i \in [r']$, let $H_i \subseteq P^m$ be the hyperplane defined by $\sum_{j=0}^{m} c_{i,j} X_j = 0$. By ignoring $H_i$ for $i > r + 1$, we may assume $r' = r + 1$. Let $V_0 := V$ and $V_i := V_{i-1} \cap H_i$ for $i \in [r']$. It suffices to show that $\dim V_i = \dim V_{i-1} - 1$ holds with probability at least $1 - d/|S|$, for each $i \in [r']$ (the dimension of the empty set is $-1$ by convention).
Fix $i \in [r']$ and the $c_{i',j}$ for $i' \in [i-1]$ and $0 \le j \le m$; then $V_{i-1}$ is also fixed. Note that $V_{i-1} \ne \emptyset$, since taking a hyperplane section reduces the dimension by at most one. If $\dim V_i \ne \dim V_{i-1} - 1$, then $\dim V_i = \dim V_{i-1}$, and $H_i$ contains some irreducible component of $V_{i-1}$ [Har92, Exercise 11.6]. Let Y be an irreducible component of $V_{i-1}$, and fix a point $P \in Y$. Then $Y \subseteq H_i$ only if $P \in H_i$, which holds only if $c_{i,0}, \ldots, c_{i,m}$ satisfy a nonzero linear equation determined by P. This occurs with probability at most $1/|S|$ (e.g., by fixing all but one $c_{i,j}$). We also have $\deg V_{i-1} \le d$, and hence the number of irreducible components of $V_{i-1}$ is bounded by d. By the union bound, $H_i$ contains an irreducible component of $V_{i-1}$ with probability at most $d/|S|$.
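The counting step in this proof, that a fixed point imposes exactly one linear condition on the random coefficients, can be verified exhaustively on a toy example (our illustration, taking S to be a small prime field):

```python
from itertools import product

p, m = 5, 2                  # toy field F_5, ambient P^2
P = (1, 3, 2)                # a fixed nonzero point, in homogeneous coordinates

# Count coefficient vectors c in F_5^3 whose linear form vanishes at P.
hits = sum(1 for c in product(range(p), repeat=m + 1)
           if sum(ci * Pi for ci, Pi in zip(c, P)) % p == 0)
total = p ** (m + 1)
print(hits, total)           # 25 125

# Exactly a 1/p fraction of the coefficient vectors vanish at P.
assert hits * p == total
```

The vanishing vectors form the kernel of a nonzero linear functional, a subspace of codimension one, which is where the $1/|S|$ in the union bound comes from.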
Proof of Theorem 17. As mentioned above, Theorem 17 is equivalent to showing that, with probability at least $1 - \delta$: (1) $\dim V' = k$, and (2) $O' \in V'$ iff $O \in V$. Note that $W_c$ is cut out in $P^m$ by the linear equations $\sum_{j=1}^{m} c_{i,j} X_j = 0$, $i = 1, \ldots, k+1$. So $W_H$ is cut out in $H \cong P^{m-1}$ (corresponding to $X_0 = 0$) by the same equations. Applying Lemma 19 to each of the irreducible components of $V_c \cap H$ and $W_H$, as subvarieties of $H \cong P^{m-1}$, we see that $V_c \cap W_H = \emptyset$ holds with probability at least $1 - (k+1)\deg(V_c)/|S|$. The projection $\pi_{O,H}$ from O to H does not increase the degree [Har92, Eg. 18.16]. Applying Lemma 19 to $\pi_{O,H}(V_c)$ and $W_H$, as subvarieties of $H \cong P^{m-1}$, we see that $\pi_{O,H}(V_c) \cap W_H = \emptyset$, and hence (when $O \notin V$) $V_c \cap W_c = \emptyset$, holds with probability at least $1 - (k+1)\deg(V_c)/|S|$. By Lemma 18 and the previous paragraphs, it holds with probability at least $1 - \delta$ that $\dim V' = k$ and that $O' \in V'$ iff $O \in V$.
Proof of Theorem 2. AnnAtZero is known to be NP-hard [Kay09]. The NP-hardness of APS follows from Lemma 13 and Theorem 14.
Given an instance f of APS, we can first find the trdeg k. Fix a subset $S \subset A$ of size larger than $2(k+1)(\max_{i \in [m]} \deg(f_i))^k$ (which can be scanned using only polynomial space). Enumerate the points $(c_{i,j})_{i \in [k+1], j \in [m]} \in S^{(k+1)m}$, and for each define $g_i := \sum_{j \in [m]} c_{i,j} f_j$. Compute the trdeg of g, and if it is k, then solve AnnAtZero for the instance g. Output NO iff some g failed the AnnAtZero test.
All these steps can be achieved in space polynomial in the input size, using the uniqueness of the annihilator for g [Kay09, Lem. 7], Perron's degree bound [Pło05] and linear algebra [Csa76].
Hitting-set for $\overline{\mathrm{VP}}$: Proof of Theorem 3
Suppose p is a prime. Define $A := \overline{\mathbb{F}_p}$. We want to find hitting-sets for certain polynomials in $A[x_1, \ldots, x_n]$. Fix a p-power $q \ge \Omega(sr^6)$, for the given parameters s, r. Assume that $p \nmid (r+1)$. Also, fix a model for the finite field $\mathbb{F}_q$ [AL86]. We now define the notion of 'infinitesimally approximating' a polynomial by a small circuit.
Approximative closure of VP [BIZ17]. A family $(f_n \mid n)$ of polynomials from $A[\mathbf{x}]$ is in the class $\overline{\mathrm{VP}}_A$ if there are polynomials $f_{n,i}$ and a function $t : \mathbb{N} \to \mathbb{N}$ such that $g_n := f_n + \varepsilon f_{n,1} + \varepsilon^2 f_{n,2} + \cdots + \varepsilon^{t(n)} f_{n,t(n)}$ has a poly(n)-size, poly(n)-degree algebraic circuit over the field $A(\varepsilon)$ computing $g_n$. The smallest possible circuit size of $g_n$ is called the approximative complexity of $f_n$, denoted $\overline{\mathrm{size}}(f_n)$.
It may happen that $g_n$ is much easier than $f_n$ in terms of traditional circuit complexity. That possibility makes the definition interesting and opens up a long line of research.
Hitting-set for $\overline{\mathrm{VP}}_A$. Given functions $s = s(n)$ and $r = r(n)$, a finite subset $H \subset A^n$ is called a hitting-set for degree-r polynomials of approximative complexity s if, for every such nonzero polynomial f: $\exists v \in H$, $f(v) \neq 0$.
A universal circuit construction can be found in [Raz08, SY10]. Note that by [HS80] there is a hitting-set, with $m := O(s'^2 n^2)$ points in $\mathbb{F}_q^n$ (since $q \ge \Omega(s'r^2)$), for the set of polynomials $\mathcal{P}$ approximated by the specializations of $\Psi(\mathbf{y}, \mathbf{x})$. Using the above notation, we give a criterion to decide whether a candidate set is a hitting-set.
Theorem 21 (hs criterion). The set $H = \{v_1, \ldots, v_m\} \subset \mathbb{F}_q^n$ is not a hitting-set for the family of polynomials $\mathcal{P}$ iff there is a satisfying assignment $(\alpha, x)$ over $A(\varepsilon)$ for Conditions (1)-(3). Remark: The above criterion holds for algebraically closed fields A of any characteristic. Thus, it reduces those hitting-set design problems to APS as well. Proof. First we show that an $x \in A(\varepsilon)$ satisfies Condition (1) iff $x \in Z_{r+1} + \varepsilon A[[\varepsilon]]$. Recall the formal power series ring $A[[\varepsilon]]$ and its group of units $A[[\varepsilon]]^*$. Note that for any polynomial $a = \sum_{i_0 \le i \le d} a_i \varepsilon^i$ with $a_{i_0} \ne 0$, the inverse $a^{-1}$ lies in $\varepsilon^{-i_0} A[[\varepsilon]]^*$; this is just a consequence of the identity $(1-\varepsilon)^{-1} = \sum_{i \ge 0} \varepsilon^i$. In other words, any rational function $a \in A(\varepsilon)$ can be written as an element of $\varepsilon^{i} A[[\varepsilon]]^*$ for some $i \in \mathbb{Z}$. Write $x = \varepsilon^{-i}(b_0 + b_1\varepsilon + \cdots)$ with $b_0 \ne 0$. For $x^{r+1} - 1$ to be in $\varepsilon A[\varepsilon]$, clearly i has to be 0 (otherwise, $\varepsilon^{-i(r+1)}$ remains uncancelled). Moreover, we deduce that $b_0^{r+1} - 1 = 0$. Thus, Condition (1) implies that $b_0$ is one of the $(r+1)$-th roots of unity $Z_{r+1} \subset A$ (recall that, since $p \nmid (r+1)$, $|Z_{r+1}| = r + 1$). Thus, $x \in Z_{r+1} + \varepsilon A[[\varepsilon]]$.
[⇒]: Suppose H is not a hitting-set for $\mathcal{P}$. Then there is a specialization $\alpha \in A(\varepsilon)^{s'}$ of the universal circuit such that $\Psi(\alpha, \mathbf{x})$ approximates a nonzero polynomial in $\mathcal{P}$ that vanishes on all of H. What remains to show is that Conditions (1) and (2) can be satisfied too.
Note that the normalized circuit $\varepsilon^{\ell} \cdot \Psi(\alpha, \mathbf{x})$ equals $g_{-\ell}$ at $\varepsilon = 0$. This means that $g_{-\ell} \in \mathcal{P}$, and it is a nonzero polynomial fooling H. Thus, H cannot be a hitting-set for $\mathcal{P}$ and we are done.
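The inversion fact used in the proof above, $a^{-1} \in \varepsilon^{-i_0} A[[\varepsilon]]^*$, is effective: the coefficients of the inverse of a unit power series follow from the geometric-series identity by a standard recursion. A small sketch (illustration only, over the rationals):

```python
from fractions import Fraction

def series_inverse(a, prec):
    """Invert a power series given by coefficient list a (a[0] != 0),
    modulo eps^prec: b_0 = 1/a_0 and b_n = -(1/a_0) * sum_{i>=1} a_i b_{n-i}."""
    b = [Fraction(1) / a[0]]
    for n in range(1, prec):
        s = sum(a[i] * b[n - i] for i in range(1, min(n, len(a) - 1) + 1))
        b.append(-s / a[0])
    return b

# (1 - eps)^{-1} = 1 + eps + eps^2 + ... , the identity quoted above.
aa = [Fraction(1), Fraction(-1)]
inv = series_inverse(aa, 6)
assert inv == [Fraction(1)] * 6

# Sanity check: (1 - eps) * inv == 1 modulo eps^6.
prod = [sum(aa[i] * inv[n - i] for i in range(min(n, len(aa) - 1) + 1))
        for n in range(6)]
assert prod == [Fraction(1)] + [Fraction(0)] * 5
```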
For every subset $H = \{v_1, \ldots, v_m\} \subset \mathbb{F}_q^n$, solve the APS instance described by Conditions (1)-(3) in Theorem 21. These are $(n + m + 1)$ algebraic circuits of degree poly(srn, log p) and a similar bitsize. Using the algorithm from Theorem 2, the instance can be solved in poly(srn, log p) space.
The number of subsets H is $q^{nm}$. So, in poly(nm log q) space we can go over all of them. If APS fails on one of them (say H), then we know that H is a hitting-set for $\mathcal{P}$. Since $\Psi$ is universal for homogeneous degree-r size-s polynomials in $A[\mathbf{x}]$, we output H as the desired hitting-set.
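The outer loop, scanning candidate sets H and outputting the first survivor of the test, can be mimicked on a toy class where the test is brute force rather than APS (our illustration: nonzero polynomials of degree at most 1 in two variables over $\mathbb{F}_3$):

```python
from itertools import combinations, product

p = 3
points = list(product(range(p), repeat=2))                  # A^2 over F_3
polys = [c for c in product(range(p), repeat=3) if any(c)]  # nonzero a + b*x + c*y

def hits(H, coeffs):
    """Does some point of H witness that the polynomial is nonzero?"""
    a, b, c = coeffs
    return any((a + b * x + c * y) % p != 0 for (x, y) in H)

def is_hitting_set(H):
    return all(hits(H, coeffs) for coeffs in polys)

# Mimic the outer loop: scan candidate subsets H in increasing size and
# output the first one passing every (here: brute-force) test.
best = next(H for size in range(1, len(points) + 1)
            for H in combinations(points, size) if is_hitting_set(H))
print(best)   # ((0, 0), (0, 1), (1, 0))
assert len(best) == 3
```

Singletons and pairs always lie on an affine line, whose defining polynomial vanishes on them; the first non-collinear triple is the first survivor.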

Conclusion
Our result that algebraic dependence testing is in AM ∩ coAM gives further indication that a randomized polynomial-time algorithm for the problem may exist. Studying special cases might be helpful for getting ideas towards designing better algorithms.
As indicated in this paper, approximate polynomials satisfiability, or equivalently testing zero-membership in the Zariski closure of the image, may have further applications to problems in computational algebraic geometry and algebraic complexity.
We know that HN is in AM over characteristic zero fields, assuming GRH [Koi96]. Can we solve AnnAtZero [Kay09] (or APS) in AM over characteristic zero fields, assuming GRH? This would also imply a better hitting-set construction for $\overline{\mathrm{VP}}$.
A morphism $f : V \to W$ is called dominant if $\overline{\mathrm{Im}(f)} = W$. The preimage of a closed subset under a morphism is closed (i.e., morphisms are continuous in the Zariski topology).
For a polynomial map $f : A^n \to A^m$ and an affine variety $V \subseteq A^n$, $W := \overline{f(V)}$ is also an affine variety (i.e., it is irreducible). To see this, assume to the contrary that W is the union of two proper closed subsets $W_1$ and $W_2$. By the definition of closure, f(V) is not contained in either $W_1$ or $W_2$, i.e., it intersects both. Then $f^{-1}(W_1) \cap V$ and $f^{-1}(W_2) \cap V$ are two proper closed subsets of V, and their union is V. This contradicts the irreducibility of V.
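A standard concrete instance (our toy example, not from the paper): for $f : A^1 \to A^2$, $t \mapsto (t^2, t^3)$, the closure $\overline{f(A^1)}$ is the irreducible cusp curve $V(Y_2^2 - Y_1^3)$, and $Y_2^2 - Y_1^3$ is an annihilator. A quick sanity check over the rationals:

```python
from fractions import Fraction

def f(t):
    """The map t -> (t^2, t^3)."""
    return (t * t, t * t * t)

def annihilator(y1, y2):
    """The polynomial Y2^2 - Y1^3 cutting out the cusp curve."""
    return y2 * y2 - y1 * y1 * y1

# The annihilator vanishes identically on Im(f)...
for t in [Fraction(n, 7) for n in range(-20, 21)]:
    assert annihilator(*f(t)) == 0

# ...but it is a nonzero polynomial: it separates points off the curve.
assert annihilator(Fraction(1), Fraction(2)) != 0
```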
The graph $\Gamma_f$ of a morphism f is the set $\{(x, f(x)) : x \in V\} \subseteq V \times W \subseteq A^n \times A^m$. Here $V \times W = \{(x, y) : x \in V, y \in W\}$ denotes the product of V and W, which is a subvariety of the $(n+m)$-dimensional affine space $A^n \times A^m \cong A^{n+m}$. Note that the graph $\Gamma_f$ is closed in $A^n \times A^m$: Suppose f sends $x \in V$ to $(f_1(x), \ldots, f_m(x)) \in A^m$, where $f_i \in A[X_1, \ldots, X_n]$ for $i \in [m]$, and suppose V and W are defined by ideals $I \subseteq A[X_1, \ldots, X_n]$ and $I' \subseteq A[Y_1, \ldots, Y_m]$ respectively. Then $\Gamma_f$ is defined by I, I′, and the polynomials $Y_i - f_i(X_1, \ldots, X_n) \in A[X_1, \ldots, X_n, Y_1, \ldots, Y_m]$, $i = 1, \ldots, m$.
Example 3. Consider Example 2 but choose $f_4$ to be $X_1 + X_2 + 1$ instead of $X_1 + X_2$. Now there is a polynomial Q with $Q(f_1, \ldots, f_4) = 0$, i.e., Q is an annihilator for f.
So $\overline{\mathrm{Im}(f)} = V(I)$, where the ideal $I \subseteq A[Y_1, \ldots, Y_m]$ consists of the annihilators for f. Also note that $\{O\} = V(\mathfrak{m})$, where $\mathfrak{m}$ is the maximal ideal $\langle Y_1, \ldots, Y_m \rangle$. Let us study the condition $O \in \overline{\mathrm{Im}(f)}$. By the ideal-variety correspondence, $\{O\} = V(\mathfrak{m}) \subseteq \overline{\mathrm{Im}(f)} = V(I)$ is equivalent to $I \subseteq \mathfrak{m}$, i.e., $Q \bmod \mathfrak{m} = 0$ for every $Q \in I$. But $Q \bmod \mathfrak{m}$ is just the constant term of the annihilator Q. Hence, we have the equivalence.
so that $\deg(\Gamma_f) \le D$ by Bézout's Theorem. By Lemma 15, there exists a curve $C \subseteq A^n$ such that $O \in \overline{f(C)}$ and $\deg(C) \le \deg(\Gamma_f) \le D$. Pick such a curve C. Apply Lemma 16 to C, $f|_C$ and O, and let $p_1, \ldots, p_n$ be as given there.