An Exploration of Spiking Neural Networks and their use on Reinforcement Learning Tasks

Publication Type:
Thesis
Issue Date:
2021
Abstract:
Artificial neural networks have recently been the predominant architecture for reinforcement learning tasks. However, there is emerging evidence that spiking neural networks can perform just as well and can retain this performance across similar environments. Spiking neural networks are experiencing a surge in popularity due to their potential for large efficiency gains compared to their traditional artificial neural network counterparts. Replicating the successes of artificial neural networks nevertheless poses challenges, because the two architectures differ substantially and therefore require different methods of training and optimisation. As spiking neural networks are considered more biologically plausible, training methods inspired by natural learning have been proposed. These methods have seen little application to complex reinforcement learning domains, having typically been focused on supervised learning problems. This thesis explores the use of spiking neural networks in reinforcement learning domains. Methods of evolutionary and spike-timing-based training are explored, and an in-depth analysis of different encoding and decoding methods is conducted. This research also examines how the length of time for which a state is exposed to a spiking neural network affects network performance.
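The abstract refers to encoding a state into spikes over a fixed exposure window and decoding spikes back into actions. As a rough illustration of one common approach, and not necessarily the methods used in the thesis, the sketch below shows Poisson rate encoding of a state vector and spike-count decoding; the function names, parameters, and the 50 ms window are assumptions made purely for illustration.

# Illustrative sketch only: Poisson rate encoding of an RL state and
# spike-count decoding of actions. All names, parameters, and the exposure
# window are assumptions, not details taken from the thesis.
import numpy as np

def rate_encode(state, duration, max_rate=100.0, dt=1e-3, rng=None):
    """Convert a state vector scaled to [0, 1] into Poisson spike trains of
    shape (timesteps, state_dim), exposing the state for `duration` seconds."""
    rng = np.random.default_rng() if rng is None else rng
    steps = int(duration / dt)
    rates = np.clip(state, 0.0, 1.0) * max_rate   # firing rate per input (Hz)
    spike_prob = rates * dt                       # spike probability per timestep
    return (rng.random((steps, len(state))) < spike_prob).astype(np.uint8)

def decode_action(output_spikes):
    """Select the action whose output neuron fired most often over the window."""
    return int(np.argmax(output_spikes.sum(axis=0)))

# Example: encode a 4-dimensional state (e.g. a normalised CartPole
# observation) over a hypothetical 50 ms exposure window.
state = np.array([0.2, 0.8, 0.5, 0.1])
input_spikes = rate_encode(state, duration=0.05)
print(input_spikes.shape)  # (50, 4)

A longer exposure window gives the network more spikes, and hence more evidence, per decision, at the cost of slower and more expensive inference; this trade-off is the kind of effect the abstract's final sentence refers to.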