Privacy-Preserving Stochastic Gradual Learning

Publisher:
IEEE COMPUTER SOC
Publication Type:
Journal Article
Citation:
IEEE Transactions on Knowledge and Data Engineering, 2021, 33, (8), pp. 3129-3140
Issue Date:
2021-08-01
Abstract:
It is challenging for stochastic optimization to handle large-scale sensitive data safely. Duchi et al. recently proposed a private sampling strategy to prevent privacy leakage in stochastic optimization. However, this strategy degrades robustness, since it is equivalent to injecting noise into each gradient, which adversely affects updates of the primal variable. To address this challenge, we introduce a robust stochastic optimization method under the framework of local privacy, called Privacy-pREserving StochasTIc Gradual lEarning (PRESTIGE). PRESTIGE bridges private updates of the primal variable (via private sampling) with gradual curriculum learning (CL). The noise injection causes an issue similar to label noise, but the robust learning process of CL can combat label noise. Thus, PRESTIGE yields 'private but robust' updates of the primal variable on the curriculum, that is, a reordered label sequence provided by CL. In theory, we derive the convergence rate and maximum complexity of PRESTIGE. Empirical results on six datasets show that PRESTIGE achieves a good tradeoff between privacy preservation and robustness compared with baselines.
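The sketch below illustrates the general idea described in the abstract, not PRESTIGE's actual algorithm: per-example gradients are clipped and perturbed with Gaussian noise (a local-privacy-style noise injection, standing in for private sampling), and examples are visited in an easy-to-hard curriculum order. The loss-based curriculum score and all parameter names are illustrative assumptions.

```python
# Minimal sketch of a "private but robust" SGD loop, assuming logistic
# regression with labels in {-1, +1}. Not the paper's method.
import numpy as np

def logistic_loss_grad(w, x, y):
    """Per-example logistic loss and gradient."""
    margin = y * (x @ w)
    loss = np.log1p(np.exp(-margin))
    grad = -y * x / (1.0 + np.exp(margin))
    return loss, grad

def private_curriculum_sgd(X, y, epochs=5, lr=0.1, clip=1.0, sigma=0.5, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        # Curriculum: reorder examples from easy (low loss) to hard (high loss).
        losses = np.array([logistic_loss_grad(w, X[i], y[i])[0] for i in range(n)])
        order = np.argsort(losses)
        for i in order:
            _, g = logistic_loss_grad(w, X[i], y[i])
            # Clip the per-example gradient, then add Gaussian noise so each
            # update exposes only a perturbed view of the individual's data.
            g = g / max(1.0, np.linalg.norm(g) / clip)
            g_private = g + rng.normal(0.0, sigma * clip, size=d)
            w -= lr * g_private
    return w
```

In this sketch, the curriculum ordering plays the role the abstract attributes to CL: noisy (privatized) gradients behave like label noise, and processing easier examples first keeps early updates of the primal variable more stable.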