Auto-Generated Curriculum For Reinforcement Learning

Publication Type:
Thesis
Issue Date:
2023
Abstract:
Our research investigates curricula that train reinforcement learning agents to improve their learning efficiency, generalization, and robustness on challenging tasks. We design auto-generated curricula that take a Reinforcement Learning (RL) agent through a sequence of easy-to-hard subtasks, producing an RL policy that performs better on long-horizon, sparse-reward problems. Sparse rewards give the agent little exploration feedback, which lowers learning efficiency, and improving the generalization and environmental robustness of RL remains a major challenge. We therefore propose an auto-generated curriculum that addresses these central issues: it makes RL policy training more efficient and promotes generalization of the policy to varied situations. The proposed framework then applies adversarial alterations to the training environment so that the learned RL policy adapts more readily to varied contexts.

Although RL can be applied to many tasks and settings, and its practical use in engineering and research is crucial, how to optimize a robot's structure for different environments remains unclear. This project therefore integrates robot morphology optimization with RL policy training under the automatically generated curriculum. We describe how environment and morphology co-evolve: the training environment changes as the agent's morphology evolves, so that the agent adapts to different surroundings.
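
To make the easy-to-hard curriculum idea concrete, the following is a minimal sketch of a success-gated training loop. The names `make_env(level)`, `train_step(policy, env)`, and `evaluate(policy, env)` are hypothetical stand-ins introduced here for illustration, not the thesis's actual API; any RL update rule (e.g., PPO) could play the role of `train_step`.

```python
def train_with_curriculum(train_step, evaluate, make_env,
                          max_level=10, promote_at=0.8, episodes=100):
    """Train on progressively harder subtasks, promoting the agent to the
    next difficulty level once its success rate clears a threshold."""
    policy, level = None, 0
    while level < max_level:
        env = make_env(level)                    # easy-to-hard sequence
        for _ in range(episodes):
            policy = train_step(policy, env)     # any RL update (e.g., PPO)
        if evaluate(policy, env) >= promote_at:  # success rate in [0, 1]
            level += 1                           # promote only on mastery
    return policy
```

Gating promotion on a measured success rate is what lets the curriculum be auto-generated: the agent's own performance, rather than a hand-tuned schedule, decides when the next, harder subtask is introduced. In practice one would also cap the total number of training steps so an unmasterable level cannot stall the loop.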
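The adversarial alteration of the training environment can be sketched in the same hypothetical terms. Here environment parameters (e.g., friction or terrain roughness) are represented as a plain dictionary, and the adversary is a simple random-search step that picks the perturbation the current policy handles worst; this is an assumed, illustrative mechanism, not the thesis's specific adversary.

```python
import random

def adversarial_perturbation(policy, evaluate, make_env, env_params,
                             n_candidates=8, step=0.1):
    """Sample small random perturbations of the environment parameters and
    return the one the current policy handles worst, so the next round of
    training happens in the most challenging nearby environment."""
    scored = []
    for _ in range(n_candidates):
        candidate = {k: v + random.uniform(-step, step)
                     for k, v in env_params.items()}
        scored.append((evaluate(policy, make_env(candidate)), candidate))
    # Lowest return = hardest environment for the current policy.
    worst_return, worst_params = min(scored, key=lambda s: s[0])
    return worst_params
```

Training against the worst nearby environment at each round is what pushes the learned policy toward robustness: it cannot overfit to one fixed training world because the world keeps shifting toward its weaknesses.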
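Finally, the morphology-environment co-evolution can be illustrated with a (1+1) evolution-strategy sketch, again under assumed names: `morphology` is a dictionary of body parameters (e.g., limb lengths) and `make_env(morphology)` builds the agent with that body. The hill-climbing acceptance rule here is a generic stand-in for whatever morphology optimizer the thesis actually uses.

```python
import random

def coevolve_morphology(policy, evaluate, make_env, morphology,
                        generations=20, sigma=0.05):
    """(1+1) evolution-strategy sketch of morphology optimization inside
    the training loop: mutate the body, keep the mutant if the current
    policy earns a higher return with it."""
    best = evaluate(policy, make_env(morphology))
    for _ in range(generations):
        mutant = {k: v * (1.0 + random.gauss(0.0, sigma))
                  for k, v in morphology.items()}   # e.g., limb lengths
        score = evaluate(policy, make_env(mutant))
        if score > best:
            morphology, best = mutant, score        # hill-climb accept
        # In the full co-evolution loop, the environment would also be
        # re-perturbed here (as in the adversarial sketch above) before
        # the next mutation, so body and world adapt to each other in turn.
    return morphology
```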