Privacy-Preserving in Double Deep-Q-Network with Differential Privacy in Continuous Spaces
- Publisher: Springer Nature
- Publication Type: Conference Proceeding
- Citation: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2022, 13151 LNAI, pp. 15-26
- Issue Date: 2022-01-01
Closed Access
| Filename | Description | Size |
| --- | --- | --- |
| 978-3-030-97546-3_2.pdf | Published version | 1.75 MB |
This item is closed access and not available.
With extensive applications and remarkable performance, deep reinforcement learning has become one of the most important technologies in current research. Reinforcement learning has been applied in many domains, such as robotics, recommendation systems, and healthcare. These systems collect data about the environment or users, which may contain sensitive information that poses a real privacy risk if disclosed. In this work, we aim to preserve the privacy of the data used in deep reinforcement learning with a Double Deep-Q-Network in continuous spaces by adopting the differentially private SGD method, which injects noise into the gradient. In our experiments, we apply different amounts of noise in two separate settings to demonstrate the effectiveness of this method.
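The abstract describes injecting noise into the gradient via differentially private SGD during Double DQN training. Below is a minimal sketch of what such an update step could look like in PyTorch: per-sample gradients are clipped to an L2 bound, summed, perturbed with Gaussian noise, and averaged. The `dp_sgd_step` helper, the network interface, and the `clip_norm` and `noise_multiplier` defaults are illustrative assumptions, not the paper's actual implementation or hyperparameters.

```python
import torch

def dp_sgd_step(q_net, target_net, batch, optimizer,
                gamma=0.99, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD update for a Double DQN loss (illustrative sketch):
    clip each per-sample gradient to L2 norm `clip_norm`, sum, add
    Gaussian noise with std `noise_multiplier * clip_norm`, average,
    then step the optimizer."""
    states, actions, rewards, next_states, dones = batch
    batch_size = states.shape[0]
    summed = [torch.zeros_like(p) for p in q_net.parameters()]

    for i in range(batch_size):
        q_net.zero_grad()
        with torch.no_grad():
            # Double DQN target: the online net selects the next action,
            # the target net evaluates it.
            next_a = q_net(next_states[i:i + 1]).argmax(dim=1, keepdim=True)
            next_q = target_net(next_states[i:i + 1]).gather(1, next_a).squeeze()
            y = rewards[i] + gamma * (1.0 - dones[i]) * next_q
        q = q_net(states[i:i + 1]).gather(1, actions[i:i + 1].view(1, 1)).squeeze()
        loss = (q - y) ** 2
        loss.backward()

        # Clip this sample's gradient to the L2 bound before accumulating.
        total_norm = torch.sqrt(sum(p.grad.pow(2).sum()
                                    for p in q_net.parameters()))
        scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)
        for acc, p in zip(summed, q_net.parameters()):
            acc.add_(p.grad * scale)

    # Perturb the summed gradient with Gaussian noise, average, and apply.
    for acc, p in zip(summed, q_net.parameters()):
        noise = torch.normal(0.0, noise_multiplier * clip_norm,
                             size=p.shape, device=p.device)
        p.grad = (acc + noise) / batch_size
    optimizer.step()
```

The explicit per-sample loop makes the clipping step easy to read at the cost of speed; practical DP-SGD implementations vectorize per-sample gradients (e.g., via Opacus), but the privacy mechanism is the same.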