Real-Time Network Slicing with Uncertain Demand: A Deep Learning Approach
- Publication Type: Conference Proceeding
- Citation: IEEE International Conference on Communications, 2019, 2019-May
- Issue Date: 2019-05-01
Open Access
This item is open access.
© 2019 IEEE. Practical and efficient network slicing often faces real-time dynamics of network resources and uncertain customer demands. This work provides an optimal and fast resource slicing solution under such dynamics by leveraging the latest advances in deep learning. Specifically, we first introduce a novel system model which allows the network provider to effectively allocate its combinatorial resources, i.e., spectrum, computing, and storage, to various classes of users. To allocate resources to users while accounting for their dynamic demands and the network provider's resource constraints, we employ a semi-Markov decision process framework. To obtain the optimal resource allocation policy for the network provider without requiring knowledge of environment parameters, e.g., uncertain service times and resource demands, a Q-learning algorithm is adopted. Although this algorithm can maximize the revenue of the network provider, its convergence to the optimal policy is particularly slow, especially for problems with large state/action spaces. To overcome this challenge, we propose a novel approach using an advanced deep Q-learning technique, called deep dueling, which can obtain the optimal policy a few thousand times faster than the conventional Q-learning algorithm. Simulation results show that our proposed framework can improve the long-term average return of the network provider by up to 40% compared with other current approaches.
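The "deep dueling" technique described in the abstract builds on the dueling Q-network architecture, in which a shared feature layer feeds separate state-value and advantage streams that are recombined into Q-values. The sketch below is a minimal PyTorch rendering of that idea only, not the authors' implementation; the state layout (remaining spectrum, computing, and storage plus the arriving request's class) and the action count are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn


class DuelingQNetwork(nn.Module):
    """Dueling architecture: a shared feature extractor feeds a state-value
    stream V(s) and an advantage stream A(s, a), recombined into Q(s, a)."""

    def __init__(self, state_dim: int, num_actions: int, hidden: int = 64):
        super().__init__()
        self.feature = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        # State-value stream: one scalar per state.
        self.value = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )
        # Advantage stream: one value per action.
        self.advantage = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, num_actions)
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        x = self.feature(state)
        v = self.value(x)        # shape (batch, 1)
        a = self.advantage(x)    # shape (batch, num_actions)
        # Subtract the mean advantage so V and A are identifiable.
        return v + a - a.mean(dim=1, keepdim=True)


if __name__ == "__main__":
    # Hypothetical slicing state: normalized remaining spectrum, computing,
    # and storage plus the class of the arriving request (4 features);
    # hypothetical actions: e.g., accept with one of two resource bundles, or reject.
    net = DuelingQNetwork(state_dim=4, num_actions=3)
    q_values = net(torch.rand(8, 4))
    print(q_values.shape)  # torch.Size([8, 3])
```

In the slicing setting described above, each forward pass would score the available accept/reject actions for the current request; separating the value and advantage streams is what the dueling architecture relies on to speed convergence relative to a plain Q-network or tabular Q-learning.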