Effect of dilution in asymmetric recurrent neural networks
May 10, 2018 · Declared Dead · Neural Networks
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Viola Folli, Giorgio Gosti, Marco Leonetti, Giancarlo Ruocco
arXiv ID
1805.03886
Category
cond-mat.dis-nn
Cross-listed
cs.NE, q-bio.NC
Citations
33
Venue
Neural Networks
Last Checked
1 month ago
Abstract
We study with numerical simulation the possible limit behaviors of synchronous discrete-time deterministic recurrent neural networks composed of N binary neurons as a function of the network's level of dilution and asymmetry. The network dilution measures the fraction of neuron couples that are connected, and the network asymmetry measures to what extent the underlying connectivity matrix is asymmetric. For each given neural network, we study the dynamical evolution of all the different initial conditions, thus characterizing the full dynamical landscape without imposing any learning rule. Because the dynamics are deterministic, each trajectory converges to an attractor, which can be either a fixed point or a limit cycle. These attractors form the set of all the possible limit behaviors of the neural network. For each network, we then determine the convergence times, the limit cycles' lengths, the number of attractors, and the sizes of the attractors' basins. We show that there are two network structures that maximize the number of possible limit behaviors. The first optimal structure is fully connected and symmetric; the second, in contrast, is highly sparse and asymmetric. The latter is similar to what is observed in several biological neuronal circuits. These observations lead us to hypothesize that, independently of any given learning model, an efficient and effective biological network that stores a number of limit behaviors close to its maximum capacity tends to develop a connectivity structure similar to one of the optimal networks we found.
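Since the scanner found no code for this paper, here is a minimal sketch of the kind of experiment the abstract describes. It assumes the standard synchronous binary-neuron update s_i(t+1) = sign(Σ_j J_ij s_j(t)); the network constructor, the dilution/asymmetry parameterization, and the helper names (make_network, step, find_attractors) are illustrative choices, not the authors' implementation. For a small N it enumerates all 2^N initial conditions of a diluted, asymmetric network and tallies attractors, cycle lengths, and basin sizes.

```python
import itertools
import numpy as np

def make_network(n, dilution, asymmetry, rng):
    # Random +-1 couplings, zero diagonal. `dilution` is the fraction of neuron
    # couples (i, j) that are connected; `asymmetry` is the fraction of couples
    # whose J_ji is drawn independently of J_ij (0 = symmetric, 1 = fully
    # asymmetric). One possible parameterization, hedged as an assumption.
    upper = np.triu(rng.choice([-1.0, 1.0], size=(n, n)), 1)
    J = upper + upper.T                                  # symmetric start
    redraw = np.triu(rng.random((n, n)) < asymmetry, 1)  # couples to desymmetrize
    J[redraw.T] = rng.choice([-1.0, 1.0], size=redraw.sum())
    keep = np.triu(rng.random((n, n)) < dilution, 1)
    return J * (keep | keep.T)                           # dilute whole couples

def step(J, s):
    # Synchronous deterministic update: s_i <- sign(sum_j J_ij s_j),
    # with the tie h_i = 0 broken toward +1 (one possible convention).
    return np.where(J @ s >= 0, 1, -1)

def find_attractors(J, n):
    # Run every one of the 2^n initial conditions to its attractor.
    # Deterministic dynamics on a finite state space must revisit a state,
    # so every trajectory ends in a fixed point or a limit cycle.
    basins = {}
    for bits in itertools.product((-1, 1), repeat=n):
        s, seen = np.array(bits), {}
        while tuple(s) not in seen:
            seen[tuple(s)] = len(seen)
            s = step(J, s)
        first = seen[tuple(s)]                 # step at which the cycle starts
        cycle = frozenset(k for k, v in seen.items() if v >= first)
        basins[cycle] = basins.get(cycle, 0) + 1
    return basins

rng = np.random.default_rng(0)
J = make_network(8, dilution=0.3, asymmetry=0.8, rng=rng)
basins = find_attractors(J, 8)
print(len(basins), "attractors")
print("cycle lengths:", sorted(len(c) for c in basins))
print("basin sizes:", sorted(basins.values(), reverse=True))
```

Sweeping `dilution` and `asymmetry` over a grid and counting `len(basins)` per network would reproduce the kind of landscape the abstract summarizes, at a cost that grows as 2^N, so only small N are tractable by exhaustive enumeration.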
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – cond-mat.dis-nn
👻 Ghosted – Mutual Information, Neural Networks and the Renormalization Group
👻 Ghosted – Machine learning meets network science: dimensionality reduction for fast and efficient embedding of networks in the hyperbolic space
👻 Ghosted – Classification and Geometry of General Perceptual Manifolds
👻 Ghosted – The jamming transition as a paradigm to understand the loss landscape of deep neural networks
👻 Ghosted – Criticality in Formal Languages and Statistical Physics
Died the same way – 👻 Ghosted
👻 Ghosted – Language Models are Few-Shot Learners
👻 Ghosted – PyTorch: An Imperative Style, High-Performance Deep Learning Library
👻 Ghosted – XGBoost: A Scalable Tree Boosting System