Theory of Computing
-------------------
Title   : Private Learning and Sanitization: Pure vs. Approximate
          Differential Privacy
Authors : Amos Beimel, Kobbi Nissim, and Uri Stemmer
Volume  : 12
Number  : 1
Pages   : 1-61
URL     : https://theoryofcomputing.org/articles/v012a001

Abstract
--------

We compare the sample complexity of private learning [Kasiviswanathan
et al. 2008] and sanitization [Blum et al. 2008] under _pure_
$\epsilon$-differential privacy [Dwork et al. TCC 2006] and _approximate_
$(\epsilon,\delta)$-differential privacy [Dwork et al. Eurocrypt 2006].
We show that the sample complexity of these tasks under approximate
differential privacy can be significantly lower than that under pure
differential privacy.

We define a family of optimization problems, which we call _Quasi-Concave
Promise Problems_, that generalizes some of the tasks we consider. We
observe that a quasi-concave promise problem can be privately approximated
using a solution to a smaller instance of a quasi-concave promise problem.
This allows us to construct an efficient recursive algorithm to solve such
problems privately. Specifically, we construct private learners for point
functions, threshold functions, and axis-aligned rectangles in high
dimension. Similarly, we construct sanitizers for point functions and
threshold functions.

We also examine the sample complexity of _label-private_ learners, a
relaxation of private learning where the learner is required to protect
only the privacy of the labels in the sample. We show that the VC
dimension completely characterizes the sample complexity of such learners;
that is, the sample complexity of learning with label privacy is equal
(up to constants) to that of learning without privacy.

An extended abstract of this paper appeared in the Proceedings of the
17th International Workshop on Randomization and Computation, 2013.
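
As a reminder, the two privacy notions compared above are the standard
ones of Dwork et al.: a randomized algorithm $M$ is
$(\epsilon,\delta)$-differentially private if for every pair of databases
$D, D'$ differing in a single entry and for every set $S$ of outcomes,
$$\Pr[M(D) \in S] \;\le\; e^{\epsilon} \cdot \Pr[M(D') \in S] + \delta,$$
and _pure_ $\epsilon$-differential privacy is the special case
$\delta = 0$.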
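
Roughly, the quasi-concave promise problems mentioned above concern a
quality function $q$ over a totally ordered domain satisfying
$$q(\ell) \;\ge\; \min\{\, q(i),\, q(j) \,\} \quad \text{for all } i \le \ell \le j,$$
together with a promise that some point attains quality at least a given
threshold; the goal is to privately output a point of nearly maximal
quality. This is an informal restatement; see the paper for the precise
definition.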