Efficient and Reproducible Automated Deep Learning
- Publication Type: Thesis
- Issue Date: 2021
Open Access
This item is open access.
Deep learning has shown its power in many applications, such as visual perception, language modeling, speech recognition, and video games. To deploy a deep learning model successfully, each component requires manual tuning, including the neural architecture design, the choice of optimization strategy, data selection, and data augmentation. Such manual tuning consumes expensive computational resources and is labor-intensive. Moreover, this paradigm does not scale when the model size or the data size increases significantly. Fortunately, automated deep learning (AutoDL) promises to alleviate this problem by automating the tuning procedure. Despite the recent success of AutoDL, the efficiency and reproducibility of AutoDL algorithms remain a tremendous challenge for the community.
In this thesis, we address this challenge in the following aspects. We comprehensively review the current state of AutoDL and set up six step-by-step objectives for its further development. To achieve these objectives, we propose a series of efficient approaches that learn to search for (1) the neural architecture topology, (2) the neural architecture size, and (3) the hyperparameters by gradient descent. In addition to standard empirical analyses on vision and NLP datasets, we build a systematic benchmark for neural architecture topology and neural architecture size. This benchmark aims to provide a fair and easy-to-use environment for our proposed algorithms as well as for other AutoDL researchers.
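To illustrate what "searching by gradient descent" means here, the sketch below shows a generic DARTS-style differentiable search step, where a softmax over learnable logits relaxes the discrete choice among candidate operations so that it can be optimized with backpropagation. This is a minimal illustration under assumed names (`MixedOp`, `alpha`), not the thesis's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One searchable edge: a softmax-weighted sum over candidate ops."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 convolution
            nn.AvgPool2d(3, stride=1, padding=1),         # average pooling
        ])
        # Architecture parameters: one learnable logit per candidate op.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.alpha, dim=0)  # relax the discrete choice
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Toy usage: the architecture logits receive gradients like any weight,
# so the topology choice itself is tuned by gradient descent.
mixed = MixedOp(channels=8)
x = torch.randn(2, 8, 16, 16)
loss = mixed(x).mean()
loss.backward()
print(mixed.alpha.grad)  # gradients w.r.t. the architecture logits
```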