Field | Value | Language
dc.contributor.author | Huang, W |
dc.contributor.author | Du, W |
dc.contributor.author | Xu, RYD |
dc.date | 2021-08-19 |
dc.date.accessioned | 2022-07-05T03:23:21Z |
dc.date.available | 2022-07-05T03:23:21Z |
dc.date.issued | 2021-08-01 |
dc.identifier.citation | Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021, pp. 2577-2583 |
dc.identifier.uri | http://hdl.handle.net/10453/158656 |
dc.description.abstract | The prevailing thinking is that orthogonal weights are crucial to enforcing dynamical isometry and speeding up training. The increase in learning speed that results from orthogonal initialization in linear networks has been well established. However, while the same is believed to hold for nonlinear networks when the dynamical isometry condition is satisfied, the training dynamics behind this contention have not been thoroughly explored. In this work, we study the dynamics of ultra-wide networks with orthogonal initialization across a range of architectures, including Fully Connected Networks (FCNs) and Convolutional Neural Networks (CNNs), via the neural tangent kernel (NTK). Through a series of propositions and lemmas, we prove that two NTKs, one corresponding to Gaussian weights and one to orthogonal weights, are equal when the network width is infinite. Further, during training, the NTK of an orthogonally initialized infinite-width network should theoretically remain constant. This suggests that orthogonal initialization cannot speed up training in the NTK (lazy training) regime, contrary to prevailing thinking. To explore under what circumstances orthogonality can accelerate training, we conduct a thorough empirical investigation outside the NTK regime. We find that when the hyper-parameters are set so that the nonlinear activations operate in a linear regime, orthogonal initialization can improve the learning speed under a large learning rate or large depth. |
dc.language | en |
dc.publisher | International Joint Conferences on Artificial Intelligence Organization |
dc.relation.ispartof | Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence |
dc.relation.isbasedon | 10.24963/ijcai.2021/355 |
dc.rights | info:eu-repo/semantics/openAccess |
dc.title | On the Neural Tangent Kernel of Deep Networks with Orthogonal Initialization |
dc.type | Conference Proceeding |
pubs.organisational-group | /University of Technology Sydney |
pubs.organisational-group | /University of Technology Sydney/Faculty of Engineering and Information Technology |
pubs.organisational-group | /University of Technology Sydney/Strength - INEXT - Innovation in IT Services and Applications |
pubs.organisational-group | /University of Technology Sydney/Strength - GBDTC - Global Big Data Technologies |
pubs.organisational-group | /University of Technology Sydney/Faculty of Engineering and Information Technology/School of Electrical and Data Engineering |
utslib.copyright.status | open_access |
dc.date.updated | 2022-07-05T03:23:20Z |
pubs.finish-date | 2021-08-27 |
pubs.publication-status | Published |
pubs.start-date | 2021-08-19 |