Differential Privacy in Multi-agent Reinforcement Learning
- Publication Type:
- Thesis
- Issue Date:
- 2022
Open Access
This item is open access.
In multi-agent reinforcement learning, agent advising, in which agents ask for or give advice to one another, has become an increasingly important topic, as it can significantly improve agents' learning speed with negligible computational overhead. However, agent advising faces critical challenges, particularly in performance and privacy. Differential privacy is a promising privacy-preserving model with several valuable properties: it provides a provable guarantee of privacy protection, while its randomisation property helps resist inference attacks in machine learning schemes. This thesis therefore explores the feasibility of adopting differential privacy mechanisms to resolve these two challenges in multi-agent reinforcement learning. In summary, this thesis makes the following contributions:
• A differential advising method is proposed, which allows agents to use a piece of advice in various states.
• A differential knowledge transfer method is proposed, which improves learning performance in a homogeneous multi-agent reinforcement learning system.
• A novel time-driven and privacy-preserving navigation learning scheme is proposed for multi-agent vehicular communication.
• A novel multi-agent reinforcement learning model that jointly adopts deep reinforcement learning and differential privacy is proposed for evolutionary game theory, which cultivates more cooperators while protecting agents' sensitive information.
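For background on the randomisation property the abstract refers to, the canonical realisation of differential privacy is the Laplace mechanism: a query answer is perturbed with noise scaled to the query's sensitivity divided by the privacy budget ε. The sketch below is illustrative only and is not taken from the thesis; the function and parameter names are hypothetical.

```python
import random

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return true_value perturbed with Laplace(0, sensitivity/epsilon) noise,
    giving epsilon-differential privacy for a query with the given L1 sensitivity."""
    scale = sensitivity / epsilon
    # The difference of two i.i.d. Exponential(rate=1/scale) draws is Laplace(0, scale).
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise

# Hypothetical use in agent advising: an agent shares a noisy value estimate
# for a state instead of the exact one, limiting what the advisee can infer.
noisy_advice = laplace_mechanism(true_value=0.75, sensitivity=1.0, epsilon=0.5)
```

A smaller ε means a larger noise scale and stronger privacy at the cost of less accurate advice, which is the performance-privacy trade-off the thesis studies.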