Deep Reinforcement Learning for Artificial Intelligence-enabled Autonomous Penetration Testing in Cyber Security

Publication Type: Thesis
Issue Date: 2022
This research aims to understand the technical challenges of autonomous penetration testing and to develop novel Deep Reinforcement Learning (DRL) frameworks that address two obstacles to a scalable autonomous penetration tester (PT): the complexity of the action space and the complexity of the state space. We reformulate the conventional single-agent DRL approach as a multi-agent learning framework, decomposing the complex, structured action space into manageable sub-modules, each controlled by a DRL agent. We introduce two new frameworks, one for each action-space representation: cascaded reinforcement learning agents for large discrete action spaces, and multi-agent reinforcement learning for parameterised action spaces. To train reinforcement learning agents under a complex state space, we adapt hierarchical reinforcement learning to a multi-agent system in which a high-level layer learns to assign subgoals to a lower level, which in turn learns a primitive policy to achieve the given subgoals. Subgoal learning is facilitated by using the Successor Representation to obtain a state abstraction in environments with sparse or no reward. All the proposed approaches can be integrated end-to-end to develop an AI-enabled autonomous penetration testing application.
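The abstract gives no implementation details, so the sketch below is only a rough illustration of the final point: a tabular Successor Representation (SR) learned by temporal-difference updates and then clustered to obtain a coarse state abstraction that a high-level agent could use for subgoal selection. Everything here is a hypothetical example, not the thesis's actual method: the function names, hyperparameters, toy environment, and the use of scikit-learn's KMeans are assumptions.

```python
import numpy as np

def td_update_sr(M, s, s_next, alpha=0.1, gamma=0.95):
    """One TD update of the SR matrix M for an observed transition s -> s_next.

    M[i, j] estimates the expected discounted number of future visits to state j
    when starting from state i. Note the update uses only the transition, not a
    reward, so it still works when rewards are sparse or absent.
    """
    onehot = np.zeros(M.shape[0])
    onehot[s] = 1.0
    target = onehot + gamma * M[s_next]
    M[s] += alpha * (target - M[s])
    return M

def sr_state_abstraction(M, n_clusters=4, seed=0):
    """Group states whose SR rows are similar; each cluster is an abstract state
    that could serve as a subgoal region for a high-level agent (illustrative only)."""
    from sklearn.cluster import KMeans
    return KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(M)

# Usage on a toy 10-state random-walk chain standing in for an environment model.
n_states = 10
M = np.zeros((n_states, n_states))
rng = np.random.default_rng(0)
s = 0
for _ in range(5000):
    s_next = int(min(max(s + rng.choice([-1, 1]), 0), n_states - 1))
    M = td_update_sr(M, s, s_next)
    s = s_next
print(sr_state_abstraction(M))  # cluster label per state = coarse state abstraction
```

Because the SR captures expected discounted state occupancies, it can be learned from transitions alone, which is why it remains usable as a basis for subgoal discovery when the reward signal is sparse or missing.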