Lightweight Privacy-Preserving Training and Evaluation for Discretized Neural Networks

Publisher:
IEEE (Institute of Electrical and Electronics Engineers, Inc.)
Publication Type:
Journal Article
Citation:
IEEE Internet of Things Journal, vol. 7, no. 4, pp. 2663-2678, Apr. 2020
Issue Date:
2020-04-01
Abstract:
© 2014 IEEE. Machine learning, particularly the neural network (NN), is extensively exploited in a dizzying array of applications. To reduce the computational burden on resource-constrained clients, large volumes of historical private data must be outsourced to a semi-trusted or malicious cloud for model training and evaluation. To achieve privacy preservation, most existing work either exploits public-key fully homomorphic encryption (FHE), which incurs considerable computational cost and ciphertext expansion, or secure multiparty computation (SMC), which requires multiple rounds of interaction between the user and the cloud. To address these issues, this article proposes LPTE, a lightweight privacy-preserving model training and evaluation scheme for discretized NNs (DiNNs). First, we put forward an efficient single-key fully homomorphic data encapsulation mechanism (SFH-DEM) that does not rely on public-key FHE. Based on SFH-DEM, a series of atomic calculations over the encrypted domain, including multivariate polynomial evaluation, nonlinear activation functions, gradient computation, and maximum operations, are devised as building blocks. Furthermore, these blocks are assembled into the lightweight privacy-preserving model training and evaluation scheme LPTE for DiNNs, which can also be extended to convolutional NNs. Finally, we give formal security proofs for dataset privacy, model training privacy, and model evaluation privacy in the semi-honest setting, and run experiments on the real-world MNIST handwritten-digit dataset to demonstrate the high efficiency and accuracy of the proposed LPTE.
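For intuition about the kind of model the scheme targets, below is a minimal sketch of a plain (unencrypted) discretized NN forward pass: integer-quantized weights, a sign activation in the hidden layer, and an argmax over the output scores, applied to an MNIST-style flattened input. The helper names discretize and dinn_forward are hypothetical and introduced here purely for illustration; this is not the paper's LPTE or SFH-DEM construction, which carries out such computations over encrypted data.

import numpy as np

def discretize(w, levels=3):
    """Quantize real-valued weights to integers in {-levels, ..., levels}."""
    scale = levels / (np.abs(w).max() + 1e-12)
    return np.round(w * scale).astype(np.int64)

def dinn_forward(x, W1, W2):
    """Evaluate a toy DiNN: integer matmul -> sign activation -> integer matmul."""
    h = np.sign(W1 @ x)   # nonlinear activation over integer pre-activations
    return W2 @ h         # output scores; argmax gives the predicted digit

# Toy usage with random discretized weights and a binarized 28x28 input vector.
rng = np.random.default_rng(0)
W1 = discretize(rng.standard_normal((100, 784)))
W2 = discretize(rng.standard_normal((10, 100)))
x = rng.integers(0, 2, size=784)
print(int(np.argmax(dinn_forward(x, W1, W2))))

In the paper's setting, the matrix-vector products and the nonlinear sign/maximum steps above would instead be evaluated over ciphertexts using the SFH-DEM-based building blocks (multivariate polynomial, activation, gradient, and maximum operations) described in the abstract.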