DeepPlace: Deep reinforcement learning for adaptive flow rule placement in Software-Defined IoT Networks
- Publisher: Elsevier
- Publication Type: Journal Article
- Citation: Computer Communications, 2022, 181, pp. 156-163
- Issue Date: 2022-01-01
Closed Access
| Filename | Description | Size |
|---|---|---|
| 1-s2.0-S0140366421003789-main.pdf | Published version | 1.02 MB |
This item is closed access and is not available for download.
In this paper, we propose DeepPlace, a novel adaptive flow rule placement system based on deep reinforcement learning for Software-Defined Internet of Things (SDIoT) networks. DeepPlace provides fine-grained traffic analysis while assuring the QoS of traffic flows and proactively avoiding flow-table overflow in the data plane. Specifically, we first investigate the traffic forwarding process in an SDIoT network, comprising the routing and flow rule placement tasks, and design a cost function for routing that sets up traffic flow paths in the data plane. Next, we propose an adaptive flow rule placement approach that maximizes the number of match-fields in a flow rule at SDN switches. To deal with the dynamics of IoT traffic flows, we model the system operation as a Markov decision process (MDP) with a continuous action space and formulate its optimization problem. We then develop a deep deterministic policy gradient-based algorithm with which the system obtains the optimal policy. Evaluation results demonstrate that, in a highly dynamic traffic scenario, DeepPlace maintains approximately 86% of the maximum number of match-fields per flow rule while keeping the QoS violation ratio of traffic flows to 6.7%, outperforming three existing solutions: FlowMan, FlowStat, and DeepMatch.
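Since the paper itself is closed access, the following is only a minimal sketch of the deep deterministic policy gradient (DDPG) update style the abstract refers to, using linear function approximators instead of deep networks. The state and action dimensions, learning rate, and the interpretation of the action (a continuous flow-rule placement decision) are all illustrative assumptions, not the paper's actual model.

```python
import numpy as np

# Hypothetical dimensions, invented for illustration: the state could encode
# traffic-flow features and flow-table occupancy; the continuous action could
# encode a flow rule placement decision (e.g., a match-field aggregation level).
rng = np.random.default_rng(0)
state_dim, action_dim = 4, 1

# Linear actor mu(s) = tanh(Wa @ s) and linear critic Q(s, a) = wq @ [s; a]
# stand in for the deep networks a real DDPG agent would use.
Wa = rng.normal(scale=0.1, size=(action_dim, state_dim))
wq = rng.normal(scale=0.1, size=state_dim + action_dim)

def actor(s):
    """Deterministic policy: continuous action in (-1, 1)."""
    return np.tanh(Wa @ s)

def critic(s, a):
    """Action-value estimate for a state-action pair."""
    return wq @ np.concatenate([s, a])

def ddpg_update(s, r, s_next, lr=0.01, gamma=0.99):
    """One DDPG-style step on a single transition (s, r, s_next)."""
    global Wa, wq
    a = actor(s)
    # Critic: semi-gradient TD update toward the bootstrapped target
    # r + gamma * Q(s', mu(s')) computed with the current policy.
    target = r + gamma * critic(s_next, actor(s_next))
    td = target - critic(s, a)
    wq += lr * td * np.concatenate([s, a])  # grad of Q w.r.t. wq is [s; a]
    # Actor: deterministic policy gradient, chain rule dQ/da * dmu/dWa.
    dq_da = wq[state_dim:]          # Q is linear in a
    da_dz = 1.0 - actor(s) ** 2     # tanh derivative
    Wa += lr * np.outer(dq_da * da_dz, s)

# Usage: feed transitions observed from the environment.
s = np.ones(state_dim)
ddpg_update(s, r=1.0, s_next=np.zeros(state_dim))
```

A full DDPG implementation would add a replay buffer, target networks, and exploration noise; the sketch keeps only the two coupled updates (critic via TD learning, actor via the deterministic policy gradient) that the abstract's "deep deterministic policy gradient-based algorithm" implies.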