Incentive Mechanism Design of Federated Learning for Recommendation Systems in MEC

Publisher:
Institute of Electrical and Electronics Engineers (IEEE)
Publication Type:
Journal Article
Citation:
IEEE Transactions on Consumer Electronics, 2023, vol. PP, no. 99, pp. 1-1
Issue Date:
2023-01-01
Abstract:
With the rapid development of consumer electronics and communication technology, a large amount of data is generated by end users at the edge of the network. Modern recommendation systems take full advantage of such data to train their various artificial intelligence (AI) models. However, traditional centralized training requires transmitting all the data to cloud-based servers, which suffers from privacy leakage and resource shortages. Mobile edge computing (MEC) combined with federated learning (FL) is therefore considered a promising paradigm to address these issues: smart devices provide data and computing resources for FL and transmit their local model parameters to a base station (BS) equipped with edge servers, where they are aggregated into a global model. Nevertheless, because of their limited physical resources and the risk of privacy leakage, users (the owners of the devices) are unwilling to participate in FL voluntarily. To address this issue, we draw on game theory and propose an incentive mechanism based on a two-stage Stackelberg game that motivates users to contribute computing resources to FL. We define utility functions for the users and the BS, and formulate the corresponding utility maximization problem. Through theoretical analysis, we obtain the Nash equilibrium strategy of the users and the Stackelberg equilibrium of the utility maximization problem. Furthermore, we propose a game-based incentive mechanism algorithm (GIMA) to reach the Stackelberg equilibrium. Finally, simulation results verify the performance of GIMA: the algorithm converges quickly and achieves higher utility than other incentive methods.
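The abstract does not give the paper's actual utility functions, but the two-stage structure it describes can be sketched generically: in stage 2 the users (followers) reach a Nash equilibrium via iterated best response for a given reward, and in stage 1 the BS (leader) searches for the reward that maximizes its own utility given that equilibrium. The utility forms below (a proportional-share reward for users, a logarithmic gain minus reward cost for the BS) are illustrative assumptions, not the functions defined in the paper:

```python
import numpy as np

def follower_best_response(x, i, R, c):
    # Assumed follower utility (proportional-share form, NOT the paper's):
    #   u_i = R * x_i / sum_j x_j  -  c_i * x_i
    # Setting du_i/dx_i = 0 gives the best response
    #   x_i = max(0, sqrt(R * s / c_i) - s),
    # where s is the total contribution of the other users.
    s = x.sum() - x[i]
    if s <= 0:
        return 1e-6  # avoid the degenerate all-zero profile
    return max(0.0, np.sqrt(R * s / c[i]) - s)

def nash_equilibrium(R, c, iters=200):
    # Stage 2: iterated best response among followers until the
    # contribution profile settles (the followers' Nash equilibrium
    # for this assumed utility form).
    x = np.full(len(c), 0.1)
    for _ in range(iters):
        for i in range(len(c)):
            x[i] = follower_best_response(x, i, R, c)
    return x

def leader_search(c, rewards):
    # Stage 1: the BS (leader) picks the reward R maximizing an
    # assumed leader utility  U = log(1 + total contribution) - R,
    # anticipating the followers' equilibrium response.
    best_R, best_U = None, -np.inf
    for R in rewards:
        x = nash_equilibrium(R, c)
        U = np.log(1.0 + x.sum()) - R
        if U > best_U:
            best_R, best_U = R, U
    return best_R, best_U

# Usage: three users with heterogeneous unit costs c_i.
c = np.array([1.0, 2.0, 4.0])
R_star, U_star = leader_search(c, np.linspace(0.1, 2.0, 20))
```

A larger reward induces larger equilibrium contributions, so the leader faces the usual trade-off between stimulating participation and limiting payout; the grid search stands in for the closed-form Stackelberg equilibrium derived analytically in the paper.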