Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [preprint]

Preprint date

June 24, 2020

Authors

Yingxue Zhou (Ph.D. student), Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu (adjunct assistant professor), Arindam Banerjee (adjunct professor)

Abstract

We study differentially private (DP) algorithms for stochastic non-convex optimization. In this problem, the goal is to minimize the population loss over a p-dimensional space given n i.i.d. samples drawn from a distribution. We improve upon the population gradient bound of √p/√n from prior work and obtain a sharper rate of p^(1/4)/√n. We obtain this rate by providing the first analyses of a collection of private gradient-based methods, including the adaptive algorithms DP RMSProp and DP Adam. Our proof technique leverages the connection between differential privacy and adaptive data analysis to bound the gradient estimation error at every iterate, which circumvents the weaker generalization bound that would follow from a standard uniform convergence argument. Finally, we evaluate the proposed algorithms on two popular deep learning tasks and demonstrate the empirical advantages of DP adaptive gradient methods over standard DP SGD.
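To make the abstract concrete, below is a minimal NumPy sketch of one privatized adaptive step in the spirit of DP Adam, assuming per-example gradients are available: each example's gradient is clipped to an l2 bound, the batch average is perturbed with Gaussian noise calibrated to that bound, and the result drives an Adam-style moment update. The names (dp_adam_step, clip_norm, noise_multiplier) are illustrative, not the paper's notation, and the noise scale is a placeholder rather than a calibrated privacy guarantee.

import numpy as np

def dp_adam_step(w, per_example_grads, state, lr=1e-3, clip_norm=1.0,
                 noise_multiplier=1.0, beta1=0.9, beta2=0.999, eps=1e-8,
                 rng=None):
    # One privatized adaptive step: clip -> average -> add noise -> Adam update.
    rng = rng or np.random.default_rng(0)
    n, p = per_example_grads.shape

    # Clip each per-example gradient to l2 norm at most clip_norm.
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))

    # Gaussian mechanism: noise standard deviation scales with the clipping bound.
    noisy_grad = clipped.mean(axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm / n, size=p)

    # Adam-style exponential moving averages computed on the privatized gradient.
    state["t"] += 1
    state["m"] = beta1 * state["m"] + (1.0 - beta1) * noisy_grad
    state["v"] = beta2 * state["v"] + (1.0 - beta2) * noisy_grad ** 2
    m_hat = state["m"] / (1.0 - beta1 ** state["t"])
    v_hat = state["v"] / (1.0 - beta2 ** state["t"])
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), state

# Toy usage: 32 stand-in per-example gradients in a 10-dimensional problem.
p = 10
state = {"m": np.zeros(p), "v": np.zeros(p), "t": 0}
w = np.zeros(p)
grads = np.random.default_rng(1).normal(size=(32, p))
w, state = dp_adam_step(w, grads, state)

Swapping the Adam-style moment update for a running average of squared gradients would give a DP RMSProp-style step; the clipping and noising stages stay the same.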

Link to full paper

Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds

Keywords

machine learning, cryptography, security