Hierarchical Neural Architecture Search via Operator Clustering
September 26, 2019 · Entered Twilight · arXiv.org
"Last commit was 6.0 years ago (≥5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: README.md, augment.py, augment_cnn.py, auto_augment.py, cells.py, cifar10.pth.tar, config.py, data_loader.py, feature.py, feature_map.py, genotypes.py, images, operators.py, run_all.py, search.py, search_cnn.py, test.py, utils.py
Authors
Guilin Li, Xing Zhang, Zitong Wang, Matthias Tan, Jiashi Feng, Zhenguo Li, Tong Zhang
arXiv ID
1909.11926
Category
cs.LG: Machine Learning
Cross-listed
cs.CV, stat.ML
Citations
27
Venue
arXiv.org
Repository
https://github.com/susan0199/StacNAS
⭐ 18
Last Checked
1 month ago
Abstract
Recently, the efficiency of automatic neural architecture design has been significantly improved by gradient-based search methods such as DARTS. However, recent literature has cast doubt on the generalization ability of DARTS, arguing that DARTS performs poorly when the search space is changed, i.e., when a different set of candidate operators is used. Regularization techniques such as early stopping have been proposed to partially solve this problem. In this paper, we tackle this problem from a different perspective by identifying two factors contributing to the collapse of DARTS when the search space changes: (1) the correlation of similar operators incurs unfavorable competition among them and makes their relative importance scores unreliable, and (2) the optimization complexity gap between the proxy search stage and the final training. Based on these findings, we propose a new hierarchical search algorithm. With its operator clustering and optimization complexity matching, the algorithm can consistently find high-performance architectures across various search spaces. For all five variants of the popular cell-based search spaces, the proposed algorithm obtains state-of-the-art architectures, with the best accuracy on CIFAR-10, CIFAR-100, and ImageNet among well-established DARTS-like algorithms. Code is available at https://github.com/susan0199/StacNAS.
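The abstract names two ingredients, operator clustering and a hierarchical (two-level) weighting of candidate operators, without implementation detail. Below is a minimal PyTorch sketch of how those ideas could look; it is not the StacNAS code, and the names `correlation_clusters` and `HierarchicalMixedOp` as well as the 0.9 similarity threshold are illustrative assumptions.

```python
# Hypothetical sketch (not the StacNAS implementation): cluster candidate operators
# by the correlation of their output feature maps, then weight them hierarchically
# with a softmax over clusters and a softmax over operators within each cluster.
import torch
import torch.nn as nn
import torch.nn.functional as F

def correlation_clusters(outputs, threshold=0.9):
    """Greedily group operators whose flattened feature maps are highly correlated.

    outputs: list of tensors of identical shape, one per candidate operator.
    Returns a list of clusters, each a list of operator indices.
    """
    flats = torch.stack([o.flatten() for o in outputs])        # (num_ops, N)
    flats = flats - flats.mean(dim=1, keepdim=True)
    flats = flats / (flats.norm(dim=1, keepdim=True) + 1e-8)
    corr = flats @ flats.t()                                    # pairwise correlation
    clusters, assigned = [], set()
    for i in range(corr.size(0)):
        if i in assigned:
            continue
        members = [j for j in range(corr.size(0))
                   if j not in assigned and corr[i, j] >= threshold]
        assigned.update(members)
        clusters.append(members)
    return clusters

class HierarchicalMixedOp(nn.Module):
    """Mixed operator with group-level and within-group architecture weights."""
    def __init__(self, ops, clusters):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.clusters = clusters
        self.group_alpha = nn.Parameter(torch.zeros(len(clusters)))
        self.member_alpha = nn.ParameterList(
            [nn.Parameter(torch.zeros(len(c))) for c in clusters])

    def forward(self, x):
        g = F.softmax(self.group_alpha, dim=0)
        out = 0
        for gi, members in enumerate(self.clusters):
            w = F.softmax(self.member_alpha[gi], dim=0)
            out = out + g[gi] * sum(w[k] * self.ops[j](x)
                                    for k, j in enumerate(members))
        return out
```

In this reading, operators whose feature maps are nearly collinear end up in the same cluster and share a single group-level weight rather than competing head-to-head, which matches the intuition behind factor (1) in the abstract; the exact clustering and weighting scheme used by the paper may differ.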
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System · 👻 Ghosted
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift · 👻 Ghosted
Semi-Supervised Classification with Graph Convolutional Networks · 👻 Ghosted
Proximal Policy Optimization Algorithms · 👻 Ghosted