Rapid DP Convex Optimization via Curvature-Aware (Second-Order) Algorithms
Abstract
Differentially private gradient-based methods, most notably (stochastic) gradient descent, form the backbone of privacy-preserving machine learning in both convex and non-convex settings. In standard (non-private) optimization, second-order methods such as Newton-type algorithms typically converge faster than their first-order counterparts. This paper studies how curvature information of the loss function can be exploited to accelerate differentially private convex optimization. We introduce a differentially private variant of the cubic-regularized Newton method of Nesterov and Polyak (2006) and prove that, for strongly convex loss functions, it achieves quadratic convergence and attains the optimal excess risk bound. We further design a practical second-order private optimization algorithm for unconstrained logistic regression. Theoretical analysis and empirical evaluation both confirm the efficiency of our method: experiments show consistent improvements in excess loss over leading baselines, with a – speedup over DP-GD and DP-SGD on challenging benchmark datasets.


