Learning Neural Activations

December 27, 2019 · Entered Twilight · 🏛 arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era: a pioneer of its time

"Last commit was 6.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: Copy_of_cifar10_mobilenet.ipynb, README.md, Smoothness_analysis.ipynb, Toy_Problem.ipynb, mnist_actlearn.ipynb, xor_single_neuron.ipynb

Authors: Fayyaz ul Amir Afsar Minhas, Amina Asif
arXiv ID: 1912.12187
Category: cs.LG (Machine Learning)
Cross-listed: cs.NE, stat.ML
Citations: 2
Venue: arXiv.org
Repository: https://github.com/amina01/Learning-Neural-Activations (⭐ 2)
Last Checked: 2 months ago
Abstract
An artificial neuron is modelled as a weighted summation followed by an activation function which determines its output. A wide variety of activation functions such as rectified linear units (ReLU), leaky-ReLU, Swish, MISH, etc. have been explored in the literature. In this short paper, we explore what happens when the activation function of each neuron in an artificial neural network is learned natively from data alone. This is achieved by modelling the activation function of each neuron as a small neural network whose weights are shared by all neurons in the original network. We list our primary findings in the conclusions section. The code for our analysis is available at: https://github.com/amina01/Learning-Neural-Activations.
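A minimal sketch of the idea in Python/PyTorch, not taken from the authors' notebooks: the activation function is itself a tiny 1 → hidden → 1 network applied elementwise, and a single instance of it is shared by every neuron so its weights are trained jointly with the rest of the model. The class name, hidden size, and inner Tanh nonlinearity below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LearnedActivation(nn.Module):
    """A small 1 -> hidden -> 1 network used as an activation function.

    One instance is shared by all neurons, so its parameters are
    learned from data alongside the main network's weights.
    (Illustrative sketch; not the authors' exact implementation.)
    """
    def __init__(self, hidden: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden),
            nn.Tanh(),              # assumed inner nonlinearity
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shape = x.shape
        # Treat every pre-activation scalar as an independent 1-d input,
        # which makes the sub-network behave like a pointwise activation.
        return self.net(x.reshape(-1, 1)).reshape(shape)

# One shared copy of the activation's weights, reused at every layer
# in place of a fixed function such as ReLU.
act = LearnedActivation(hidden=10)
model = nn.Sequential(
    nn.Linear(784, 256), act,
    nn.Linear(256, 128), act,   # same module object -> shared parameters
    nn.Linear(128, 10),
)
```

Reusing the same `act` module object at each layer is what realizes the weight sharing described in the abstract: PyTorch deduplicates the shared parameters, so the learned activation receives gradients from every neuron that uses it.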
Community shame: Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt: Machine Learning