Real-Time Network Slicing with Uncertain Demand: A Deep Learning Approach
- Publication Type: Conference Proceeding
- Conference: IEEE International Conference on Communications, 2019
- Issue Date: May 2019
This item is currently unavailable due to the publisher's embargo, which expires on 1 May 2021.
© 2019 IEEE. Practical and efficient network slicing often faces real-time dynamics of network resources and uncertain customer demands. This work provides an optimal and fast resource slicing solution under such dynamics by leveraging the latest advances in deep learning. Specifically, we first introduce a novel system model that allows the network provider to effectively allocate its combinatorial resources, i.e., spectrum, computing, and storage, to various classes of users. To allocate resources to users while accounting for the dynamic demands of users and the resource constraints of the network provider, we employ a semi-Markov decision process framework. To obtain the optimal resource allocation policy for the network provider without requiring environment parameters, e.g., uncertain service times and resource demands, a Q-learning algorithm is adopted. Although this algorithm can maximize the revenue of the network provider, its convergence to the optimal policy is particularly slow, especially for problems with large state/action spaces. To overcome this challenge, we propose a novel approach using an advanced deep Q-learning technique, called deep dueling, that can reach the optimal policy a few thousand times faster than the conventional Q-learning algorithm. Simulation results show that our proposed framework can improve the long-term average return of the network provider by up to 40% compared with other current approaches.
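The abstract's two learning components can be illustrated with a minimal, generic sketch (not the authors' implementation): the tabular Q-learning update, and the dueling decomposition Q(s, a) = V(s) + (A(s, a) − mean_a A(s, a)) used by dueling-DQN-style architectures. The function names and toy values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def q_learning_update(q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = r + gamma * q[s_next].max()
    q[s, a] += alpha * (td_target - q[s, a])
    return q

def dueling_q_values(value, advantages):
    """Combine a state-value estimate V(s) and per-action advantages A(s,a)
    into Q-values, as in the dueling architecture:
    Q(s,a) = V(s) + (A(s,a) - mean_a A(s,a))."""
    advantages = np.asarray(advantages, dtype=float)
    return value + (advantages - advantages.mean())

# Toy example: one state with three candidate resource allocations.
q_vals = dueling_q_values(value=10.0, advantages=[1.0, 2.0, 6.0])
best_action = int(np.argmax(q_vals))  # index of the highest Q-value
```

In the paper's setting, the "actions" would correspond to accepting or rejecting slice requests given the provider's spectrum, computing, and storage constraints; the dueling split speeds up learning because V(s) is shared across all actions in a state.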