DropNeuron: Simplifying the Structure of Deep Neural Networks

June 23, 2016 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era, a pioneer of its time

"Last commit was 9.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, __init__.py, alexnet.py, autoencoder.py, convnet.py, input_data.py, lenet-300-100.py, lenet-5.py, regression.py, regularizers.py, result

Authors: Wei Pan, Hao Dong, Yike Guo
arXiv ID: 1606.07326
Category: cs.CV (Computer Vision)
Cross-listed: cs.LG, stat.ML
Citations: 39
Venue: arXiv.org
Repository: https://github.com/panweihit/DropNeuron (⭐ 59)
Last checked: 1 month ago
Abstract
Deep learning with multi-layer neural network (NN) architectures manifests superb power in modern machine learning systems. Trained Deep Neural Networks (DNNs) are typically large. The question we would like to address is whether it is possible to simplify the NN during the training process to achieve reasonable performance within an acceptable computational time. We present a novel approach to optimising a deep neural network through regularisation of the network architecture. We propose regularisers which support a simple mechanism of dropping neurons during the network training process. The method supports the construction of a simpler deep neural network with performance comparable to its unsimplified counterpart. As a proof of concept, we evaluate the proposed method on examples including sparse linear regression, a deep autoencoder, and a convolutional neural network. The evaluations demonstrate excellent performance. The code for this work can be found at http://www.github.com/panweihit/DropNeuron
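The regularisers described in the abstract penalise all of a neuron's incoming (or outgoing) weights as a group, so entire rows or columns of a weight matrix can be driven to zero and the corresponding neurons dropped after training. A minimal NumPy sketch of such group-lasso penalties is below; the function names mirror the repo's regularizers.py (li/lo for input/output grouping), but the exact scaling and grouping conventions here are assumptions, not the authors' verified implementation:

```python
import numpy as np

def li_regularizer(W, scale=1e-4):
    """Group-lasso penalty over each ROW of W (shape: n_in x n_out).

    Each row holds one input-side neuron's outgoing weights; the sum of
    row L2 norms pushes whole rows toward zero, so the neuron can be dropped.
    """
    return scale * np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

def lo_regularizer(W, scale=1e-4):
    """Same penalty over each COLUMN of W: one output-side neuron per column."""
    return scale * np.sum(np.sqrt(np.sum(W ** 2, axis=0)))

def count_droppable(W, tol=1e-3):
    """Count neurons whose entire outgoing weight row is numerically zero."""
    return int(np.sum(np.sqrt(np.sum(W ** 2, axis=1)) < tol))
```

In training, one of these penalties (hypothetical usage: `loss = data_loss + li_regularizer(W)`) would be added to the objective; after convergence, neurons flagged by `count_droppable` are pruned, yielding the smaller network the paper targets.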
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Computer Vision