Obfuscating the Dataset: Impacts and Applications

Publisher:
Association for Computing Machinery (ACM)
Publication Type:
Journal Article
Citation:
ACM Transactions on Intelligent Systems and Technology, 2023, 14(5)
Issue Date:
2023-09-30
Abstract:
Obfuscating a dataset by adding random noise to protect the privacy of sensitive samples in the training data is crucial for preventing data leakage to untrusted parties when dataset sharing is essential. We conduct comprehensive experiments to investigate how dataset obfuscation affects the resultant model weights, in terms of model accuracy, ℓ2-distance-based model distance, and level of data privacy, and we discuss potential applications using the proposed Privacy, Utility, and Distinguishability (PUD)-triangle diagram to visualize requirement preferences. Our experiments are based on the popular MNIST and CIFAR-10 datasets under both independent and identically distributed (IID) and non-IID settings. Significant results include a tradeoff between model accuracy and privacy level, and a tradeoff between model difference and privacy level. The results indicate broad application prospects for training outsourcing and for guarding against attacks in federated learning, both of which have become increasingly attractive in many areas, particularly learning in edge computing.
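As a minimal illustration of the two central quantities in the abstract (not the authors' exact procedure), the Python sketch below obfuscates image data with zero-mean Gaussian noise and computes an ℓ2 distance between two sets of model weights; the noise scale sigma and the random images are hypothetical stand-ins for the paper's experimental setup.

```python
import numpy as np

def obfuscate(images: np.ndarray, sigma: float, rng=None) -> np.ndarray:
    """Add zero-mean Gaussian noise (std = sigma) to each pixel,
    then clip back to the valid [0, 1] intensity range."""
    rng = rng or np.random.default_rng(0)
    noisy = images + rng.normal(0.0, sigma, size=images.shape)
    return np.clip(noisy, 0.0, 1.0)

def l2_model_distance(weights_a, weights_b) -> float:
    """l2 distance between two models, treating each model's
    parameter tensors as one flattened vector."""
    flat_a = np.concatenate([w.ravel() for w in weights_a])
    flat_b = np.concatenate([w.ravel() for w in weights_b])
    return float(np.linalg.norm(flat_a - flat_b))

# Example: obfuscate a batch of MNIST-like 28x28 images at
# increasing noise levels (larger sigma -> more privacy, less utility).
images = np.random.default_rng(1).random((64, 28, 28))  # stand-in for MNIST
for sigma in (0.05, 0.1, 0.2):
    noisy = obfuscate(images, sigma)
    print(f"sigma={sigma}: mean pixel shift = {np.abs(noisy - images).mean():.4f}")
```

Sweeping sigma in this way is one simple means of tracing the accuracy/privacy and model-difference/privacy tradeoffs the abstract reports: models trained on the noisier copies can be compared against a model trained on the clean data via l2_model_distance.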