Increasing biases can be more efficient than increasing weights

January 03, 2023 · Declared Dead · 🏛 IEEE Workshop/Winter Conference on Applications of Computer Vision

💀 CAUSE OF DEATH: 404 Not Found
Code link is broken/dead
Authors: Carlo Metta, Marco Fantozzi, Andrea Papini, Gianluca Amato, Matteo Bergamaschi, Silvia Giulia Galfrè, Alessandro Marchetti, Michelangelo Vegliò, Maurizio Parton, Francesco Morandin
arXiv ID: 2301.00924
Category: cs.NE (Neural & Evolutionary)
Cross-listed: cs.LG
Citations: 7
Venue: IEEE Workshop/Winter Conference on Applications of Computer Vision
Repository: https://github.com/CuriosAI/dac-dev
Last Checked: 1 month ago
Abstract
We introduce a novel computational unit for neural networks that features multiple biases, challenging the traditional perceptron structure. This unit emphasizes the importance of preserving uncorrupted information as it is passed from one unit to the next, applying activation functions later in the process with specialized biases for each unit. Through both empirical and theoretical analyses, we show that by focusing on increasing biases rather than weights, there is potential for significant enhancement in a neural network model's performance. This approach offers an alternative perspective on optimizing information flow within neural networks. See source code at https://github.com/CuriosAI/dac-dev.
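To make the abstract's idea concrete, here is a minimal NumPy sketch of one possible reading of the multi-bias unit: instead of the classic perceptron, which sums weighted inputs, adds a single bias, and then applies the activation, the unit passes the raw input forward and applies the activation with a specialized bias per connection before weighting. The function names, shapes, and the exact placement of the biases are assumptions for illustration, not the paper's verified implementation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def standard_unit(x, w, b):
    # Classic perceptron: one bias per output unit,
    # activation applied once to the weighted sum.
    return relu(w @ x + b)

def multi_bias_unit(x, w, B):
    # Hypothetical multi-bias unit: the raw input x is passed on
    # uncorrupted; each receiving unit j applies the activation to
    # input i with its own specialized bias B[j, i], and only then
    # combines the activated values with the weights.
    return np.sum(w * relu(x[None, :] + B), axis=1)

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # 4 inputs
w = rng.normal(size=(3, 4))   # 3 output units
b = rng.normal(size=3)        # one bias per unit (standard)
B = rng.normal(size=(3, 4))   # one bias per connection (multi-bias)

y_std = standard_unit(x, w, b)   # shape (3,)
y_mb = multi_bias_unit(x, w, B)  # shape (3,)
```

Note the parameter trade-off the title alludes to: the multi-bias layer adds only a bias matrix the same shape as the weights, rather than more weight parameters, while still giving each downstream unit its own view of the activation.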
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Neural & Evolutionary

R.I.P. 👻 Ghosted

LSTM: A Search Space Odyssey

Klaus Greff, Rupesh Kumar Srivastava, ... (+3 more)

cs.NE πŸ› IEEE TNNLS πŸ“š 6.0K cites 11 years ago

Died the same way — 💀 404 Not Found