Neural Networks Regularization Through Class-wise Invariant Representation Learning

September 06, 2017 · Entered Twilight · 🏛️ arXiv.org

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 8.0 years ago (≥5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: README.md, __init__.py, ae.py, basic_layer.py, config_yaml, data, dataset.py, exps, filterit.py, generate_exps.py, generate_exps_lenet.py, generate_exps_search.py, init_params, job0.sh, joblenet.sh, jobs, k80.sl, layer.py, layers.py, learning_rate.py, learning_rule.py, mnist_manip.py, non_linearities.py, normalization.py, outputjobs, p100.sl, plot_paper.py, submit.sh, tools.py, train3_bin.py, train3_new_dup.py, trainLenet.py

Authors: Soufiane Belharbi, Clément Chatelain, Romain Hérault, Sébastien Adam
arXiv ID: 1709.01867
Category: cs.LG (Machine Learning), cross-listed in stat.ML
Citations: 10
Venue: arXiv.org
Repository: https://github.com/sbelharbi/learning-class-invariant-features (⭐ 12)
Last checked: 1 month ago
Abstract
Training deep neural networks is known to require a large number of training samples. However, in many applications only a few training samples are available. In this work, we tackle the issue of training neural networks for a classification task when few training samples are available. We attempt to solve this issue by proposing a new regularization term that constrains the hidden layers of a network to learn class-wise invariant representations. In our regularization framework, learning invariant representations is generalized to class membership: samples of the same class should have the same representation. Numerical experiments on MNIST and its variants show that our proposal helps improve the generalization of neural networks, particularly when trained with few samples. We provide the source code of our framework at https://github.com/sbelharbi/learning-class-invariant-features .
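The core idea in the abstract, that hidden representations of same-class samples should coincide, can be sketched as a penalty added to the usual classification loss. This is a minimal NumPy illustration, not the paper's exact formulation: the function name and the pairwise squared-distance form are assumptions made for the example.

```python
import numpy as np

def classwise_invariance_penalty(hidden, labels):
    """Mean squared distance between hidden representations of
    same-class sample pairs in a batch. A sketch of the abstract's
    constraint: samples sharing a class should map to similar
    representations (zero penalty when they are identical)."""
    penalty, pairs = 0.0, 0
    for c in np.unique(labels):
        h = hidden[labels == c]          # representations of class c
        for i in range(len(h)):
            for j in range(i + 1, len(h)):
                penalty += np.sum((h[i] - h[j]) ** 2)
                pairs += 1
    return penalty / max(pairs, 1)       # average over same-class pairs

# Identical per-class representations incur no penalty.
hidden = np.array([[1., 0.], [1., 0.], [0., 2.], [0., 2.]])
labels = np.array([0, 0, 1, 1])
print(classwise_invariance_penalty(hidden, labels))  # 0.0
```

In training, such a term would be weighted by a hyperparameter and added to the cross-entropy loss of the output layer; the paper applies the constraint to hidden layers of the network.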
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Machine Learning