Automatic Selection of Security Service Function Chaining Using Reinforcement Learning
- Publication Type:
- Conference Proceeding
- Citation:
- 2018 IEEE Globecom Workshops, GC Wkshps 2018 - Proceedings, 2019
- Issue Date:
- 2019-02-19
Closed Access
Filename | Description | Size
---|---|---
Automatic Selection of Security Service Function Chaining Using Reinforcement Learning.pdf | Published version | 314.61 kB
This item is closed access and not available.
© 2018 IEEE. When selecting security Service Function Chaining (SFC) for network defense, operators usually take security performance, service quality, deployment cost, and network function diversity into consideration, formulating it as a multi-objective optimization problem. However, as applications, users, and data volumes grow massively in networks, traditional mathematical approaches cannot be applied to online security SFC selection due to high execution time and uncertainty of network conditions. Thus, in this paper, we utilize reinforcement learning, specifically the Q-learning algorithm, to automatically choose proper security SFCs for various requirements. In particular, we design a reward function to make a tradeoff among the different objectives and modify the standard ε-greedy exploration to pick out multiple ranked actions for diversified network defense. We compare Q-learning with mathematical optimization-based approaches, which are assumed to know network state changes in advance. The training and testing results show that the Q-learning based approach can capture changes in network conditions and make a tradeoff among the different objectives.
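The abstract describes three ingredients: a tabular Q-learning update, a scalarized reward trading off the four objectives, and an ε-greedy rule modified to return several ranked actions. The sketch below illustrates how these pieces could fit together; the state/action spaces, the objective weights, and the simulated environment are all illustrative assumptions, not the paper's actual model.

```python
import random

N_STATES = 4      # hypothetical network-condition states (assumed)
N_ACTIONS = 5     # hypothetical candidate security SFCs (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
WEIGHTS = (0.4, 0.3, 0.2, 0.1)  # assumed tradeoff weights for the four objectives

# Q-table: expected return of picking SFC a in network state s.
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def reward(security, quality, cost, diversity):
    """Scalarize the four objectives into one reward; cost is penalized."""
    w1, w2, w3, w4 = WEIGHTS
    return w1 * security + w2 * quality - w3 * cost + w4 * diversity

def ranked_epsilon_greedy(state, k=2):
    """Modified ε-greedy: return k ranked actions instead of a single one,
    so multiple SFCs can be selected for diversified defense."""
    if random.random() < EPSILON:
        return random.sample(range(N_ACTIONS), k)
    return sorted(range(N_ACTIONS), key=lambda a: Q[state][a], reverse=True)[:k]

def update(state, action, r, next_state):
    """Standard Q-learning temporal-difference update."""
    best_next = max(Q[next_state])
    Q[state][action] += ALPHA * (r + GAMMA * best_next - Q[state][action])

# Toy training loop over a simulated environment (assumed dynamics).
random.seed(0)
for episode in range(500):
    s = random.randrange(N_STATES)
    for a in ranked_epsilon_greedy(s):
        # Simulated per-objective scores for the chosen SFC (assumed).
        r = reward(security=a / N_ACTIONS, quality=random.random(),
                   cost=a * 0.1, diversity=0.5)
        s_next = random.randrange(N_STATES)
        update(s, a, r, s_next)
        s = s_next
```

Returning a ranked set of actions rather than a single greedy choice is what lets the operator deploy diverse defenses at once; the scalarizing weights in `reward` are where the multi-objective tradeoff the paper mentions would be tuned.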