Differential Privacy in Reinforcement Learning

Publication Type: Thesis
Issue Date: 2022
Abstract:
Reinforcement learning is a principled AI framework for autonomous, experience-driven learning. Its primary goal is to train autonomous agents to learn optimal behaviors in their interactive environments. Deep reinforcement learning combines deep learning models with reinforcement learning algorithms, enabling a higher-level understanding of the visual world. As reinforcement learning achieves great success in a growing number of application fields that may involve large amounts of private information, the security of policies and the preservation of privacy in reinforcement learning have given rise to widespread concern. Moreover, deep reinforcement learning policies parameterized by neural networks have been shown to be vulnerable to adversarial attacks, much as in supervised learning settings. Privacy leakage also occurs in multi-agent reinforcement learning systems, where agents' actions or behaviors are directly exposed to other agents. To address these privacy concerns, we apply differential privacy in various reinforcement learning scenarios. In this thesis, we introduce differentially private methods that preserve privacy in these diverse scenarios: the multi-agent advising framework, the multi-agent planning framework, the deep reinforcement learning context, machine learning classifiers, and the multi-agent game-theoretic framework. We provide detailed theoretical analysis and comprehensive experimental results in the corresponding chapters to demonstrate that our methods guarantee privacy preservation while retaining the utility of reinforcement learning in each scenario.
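As a concrete illustration of the core building block named in the abstract, the sketch below (a minimal example, not any algorithm from the thesis itself) applies the standard Laplace mechanism of differential privacy to the multi-agent advising setting: an advising agent releases a greedy action computed from noise-perturbed Q-values rather than exposing the raw values. The function names, sensitivity bound, and privacy budget epsilon are assumptions chosen purely for illustration.

    # Illustrative sketch only; not the thesis's exact algorithm.
    import numpy as np

    def laplace_mechanism(value: float, sensitivity: float, epsilon: float) -> float:
        """Perturb a scalar with Laplace noise calibrated to the query's
        L1 sensitivity, the standard epsilon-DP building block."""
        scale = sensitivity / epsilon
        return value + np.random.laplace(loc=0.0, scale=scale)

    def advise(q_values: np.ndarray, sensitivity: float = 1.0, epsilon: float = 0.5) -> int:
        """Report-noisy-max style advising: add independent Laplace noise
        to each Q-value and release only the argmax action, limiting what
        the advisee can infer about the advisor's private experience."""
        noisy = np.array([laplace_mechanism(q, sensitivity, epsilon) for q in q_values])
        return int(np.argmax(noisy))

    if __name__ == "__main__":
        private_q = np.array([0.2, 1.3, 0.7])  # advisor's private Q-values for one state
        print("advised action:", advise(private_q))

Releasing only the noisy argmax, rather than the perturbed Q-values themselves, is a common design choice because it leaks strictly less information while still letting the advisee act on the advice.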