Learning Neural Activations
December 27, 2019 · Entered Twilight · arXiv.org
"Last commit was 6.0 years ago (โฅ5 year threshold)"
Evidence collected by the PWNC Scanner
Repo contents: Copy_of_cifar10_mobilenet.ipynb, README.md, Smoothness_analysis.ipynb, Toy_Problem.ipynb, mnist_actlearn.ipynb, xor_single_neuron.ipynb
Authors
Fayyaz ul Amir Afsar Minhas, Amina Asif
arXiv ID
1912.12187
Category
cs.LG: Machine Learning
Cross-listed
cs.NE, stat.ML
Citations
2
Venue
arXiv.org
Repository
https://github.com/amina01/Learning-Neural-Activations
⭐ 2
Last Checked
2 months ago
Abstract
An artificial neuron is modelled as a weighted summation followed by an activation function which determines its output. A wide variety of activation functions such as rectified linear units (ReLU), leaky-ReLU, Swish, MISH, etc. have been explored in the literature. In this short paper, we explore what happens when the activation function of each neuron in an artificial neural network is learned natively from data alone. This is achieved by modelling the activation function of each neuron as a small neural network whose weights are shared by all neurons in the original network. We list our primary findings in the conclusions section. The code for our analysis is available at: https://github.com/amina01/Learning-Neural-Activations.
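The core idea in the abstract, modelling each neuron's activation as a small scalar-to-scalar network whose weights are shared by every neuron in the host model, can be sketched in a few lines. The snippet below is a minimal PyTorch illustration, not the authors' implementation (their repository ships Jupyter notebooks); the hidden width, the inner tanh, and the host-network layer sizes are assumptions chosen for clarity.

# Minimal sketch (not the authors' code) of a learnable activation:
# a tiny sub-network applied elementwise, with one shared weight set.
import torch
import torch.nn as nn

class LearnedActivation(nn.Module):
    """Scalar-in, scalar-out sub-network applied elementwise.

    One instance (one set of weights) is shared by every neuron that
    uses it. The hidden width and inner tanh are assumptions, not the
    paper's reported configuration.
    """
    def __init__(self, hidden: int = 4):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(1, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Flatten to (N, 1) so the sub-network acts on each scalar
        # pre-activation independently, then restore the shape.
        return self.f(x.reshape(-1, 1)).reshape(x.shape)

# Host network: reusing the same module instance after each layer
# means all neurons train a single shared activation function.
act = LearnedActivation()
model = nn.Sequential(
    nn.Linear(784, 128), act,
    nn.Linear(128, 64), act,   # same instance -> shared weights
    nn.Linear(64, 10),
)

x = torch.randn(32, 784)
print(model(x).shape)  # torch.Size([32, 10])

Because the activation sub-network's parameters receive gradients from every neuron in every layer that uses it, the learned shape of the activation is fit jointly with the rest of the model during ordinary backpropagation.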
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Machine Learning
XGBoost: A Scalable Tree Boosting System · R.I.P. 👻 Ghosted
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift · R.I.P. 👻 Ghosted
Semi-Supervised Classification with Graph Convolutional Networks · R.I.P. 👻 Ghosted
Proximal Policy Optimization Algorithms · R.I.P. 👻 Ghosted