Protect Trajectory Privacy in Food Delivery with Differential Privacy and Multi-agent Reinforcement Learning
- Publisher:
- Springer Nature
- Publication Type:
- Chapter
- Citation:
- Advanced Information Networking and Applications, 2023, 655 LNNS, pp. 48-59
- Issue Date:
- 2023-01-01
Closed Access
Filename | Description | Size
---|---|---
978-3-031-28694-0_5.pdf | | 881.81 kB
This item is closed access and not available.
Today, multiple food delivery companies operate globally across different regions, and this expansion can put users' data at risk. Such data may be stored by a third party and used in further analysis, so it must be stored in a way that prevents anyone from recovering the real data if it is disclosed. This work addresses that issue, maintaining the privacy of stored customer data by leveraging differential privacy and multi-agent reinforcement learning. First, the agent delivers the food to the customer. The agent then constructs N obfuscated trajectories with different privacy budgets, and multi-agent reinforcement learning selects one of them. The selected trajectory is evaluated against three factors: its similarity to the original trajectory, the sensitivity of the destination location, and the frequency of the customer's orders. We evaluated our approach on a meal delivery dataset from Iowa City, USA.
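The paper's own obfuscation mechanism and reinforcement-learning policy are not reproduced here. As a rough illustration of the "N obfuscated trajectories with different privacy budgets" step, the sketch below perturbs each trajectory point with the planar Laplace mechanism from the geo-indistinguishability literature (a standard technique for location differential privacy, not necessarily the one the authors use) and scores candidates by similarity to the original path. All function names and the similarity measure are illustrative assumptions.

```python
import math
import random

def obfuscate_trajectory(traj, epsilon, rng):
    """Perturb each (x, y) point with planar Laplace noise.

    Direction is uniform; the radial distance follows Gamma(2, 1/epsilon),
    which is the radial marginal of the planar Laplace distribution.
    Smaller epsilon means more noise (a stricter privacy budget).
    """
    noisy = []
    for x, y in traj:
        theta = rng.uniform(0.0, 2.0 * math.pi)   # uniform direction
        r = rng.gammavariate(2, 1.0 / epsilon)    # radial distance
        noisy.append((x + r * math.cos(theta), y + r * math.sin(theta)))
    return noisy

def candidate_trajectories(traj, budgets, seed=0):
    """Build N obfuscated candidates, one per privacy budget in `budgets`."""
    rng = random.Random(seed)
    return [obfuscate_trajectory(traj, eps, rng) for eps in budgets]

def similarity(a, b):
    """Negative mean point-wise Euclidean distance: higher means more similar.

    Stand-in for one of the paper's three evaluation factors; the other two
    (destination sensitivity, order frequency) would need domain data.
    """
    return -sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
```

In a full pipeline, the similarity score would be combined with the destination-sensitivity and order-frequency factors into a reward signal that the multi-agent reinforcement learner uses to pick one candidate trajectory for storage.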