Dynamic choice of state abstraction in Q-learning

Publication Type:
Conference Proceeding
Citation:
Frontiers in Artificial Intelligence and Applications, 2016, vol. 285, pp. 46-54
Issue Date:
2016-01-01
Filename:
FAIA285-0046.pdf (Published version, Adobe PDF, 1.04 MB)
Abstract:
© 2016 The Authors and IOS Press. Q-learning associates the states and actions of a Markov Decision Process with expected future reward through online learning. In practice, however, when the state space is large and experience is still limited, the algorithm will not find a match between the current state and past experience unless some of the details describing states are ignored. On the other hand, reducing state information hurts long-term performance, because decisions must then be made on less informative inputs. We propose a variation of Q-learning that gradually enriches state descriptions once enough experience has accumulated. This is coupled with an ad hoc exploration strategy that aims to collect the key information allowing the algorithm to enrich state descriptions earlier. Experimental results obtained by applying our algorithm to the arcade game Pac-Man show that our approach significantly outperforms Q-learning during the learning process without penalizing long-term performance.
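To make the high-level idea concrete, below is a minimal, hypothetical Python sketch of tabular Q-learning over a state abstraction that starts coarse and is refined once enough experience has been gathered. The names (abstract_state, VISITS_TO_ENRICH), the enrichment trigger, and all hyperparameters are illustrative assumptions for this sketch; they are not the authors' implementation, and the paper's tailored exploration strategy is not reproduced here.

```python
# Illustrative sketch only: Q-learning on an abstracted state that is
# enriched (made more detailed) after enough experience is accumulated.
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
VISITS_TO_ENRICH = 50                    # assumed experience threshold before refining


def abstract_state(observation, level):
    """Keep only the first (level + 1) features of the raw observation.

    Level 0 is the coarsest description; higher levels keep more detail.
    """
    return tuple(observation[: level + 1])


class AbstractionQLearner:
    def __init__(self, actions, max_level):
        self.actions = actions
        self.max_level = max_level
        self.level = 0                      # start with the coarsest abstraction
        self.q = defaultdict(float)         # Q[(abstract state, action)] -> value
        self.visits = defaultdict(int)      # visit counts per abstract state

    def act(self, observation):
        # Epsilon-greedy action selection on the current abstraction level.
        s = abstract_state(observation, self.level)
        if random.random() < EPSILON:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(s, a)])

    def update(self, observation, action, reward, next_observation):
        s = abstract_state(observation, self.level)
        s_next = abstract_state(next_observation, self.level)
        best_next = max(self.q[(s_next, a)] for a in self.actions)
        # Standard one-step Q-learning update applied to the abstracted state.
        self.q[(s, action)] += ALPHA * (reward + GAMMA * best_next - self.q[(s, action)])
        self.visits[s] += 1
        self._maybe_enrich(s)

    def _maybe_enrich(self, s):
        # Once an abstract state has been visited often enough, switch to a
        # finer abstraction. In this sketch the finer states simply start
        # with zero-initialized Q-values; transferring learned values is
        # a detail left out here.
        if self.level < self.max_level and self.visits[s] >= VISITS_TO_ENRICH:
            self.level += 1
            self.visits.clear()
```

In this sketch the enrichment is global (one abstraction level for the whole agent) and is triggered by a simple per-state visit count; both choices are simplifications made only to illustrate the trade-off the abstract describes between matching limited experience and keeping enough state information for good long-term decisions.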