Visualizing the Loss Landscape of Neural Nets

December 28, 2017 · Entered Twilight · 🏛 Neural Information Processing Systems

🌅 TWILIGHT: Old Age
Predates the code-sharing era – a pioneer of its time

"Last commit was 5.0 years ago (โ‰ฅ5 year threshold)"

Evidence collected by the PWNC Scanner

Repo contents: .gitignore, LICENSE, README.md, cifar10, dataloader.py, doc, evaluation.py, h52vtp.py, h5_util.py, hess_vec_prod.py, model_loader.py, mpi4pytorch.py, net_plotter.py, plot_1D.py, plot_2D.py, plot_hessian_eigen.py, plot_surface.py, plot_trajectory.py, projection.py, scheduler.py, script

Authors: Hao Li, Zheng Xu, Gavin Taylor, Christoph Studer, Tom Goldstein
arXiv ID: 1712.09913
Category: cs.LG (Machine Learning)
Cross-listed: cs.CV, stat.ML
Citations: 2.2K
Venue: Neural Information Processing Systems
Repository: https://github.com/tomgoldstein/loss-landscape ⭐ 3155
Last Checked: 1 month ago
Abstract
Neural network training relies on our ability to find "good" minimizers of highly non-convex loss functions. It is well-known that certain network architecture designs (e.g., skip connections) produce loss functions that train easier, and well-chosen training parameters (batch size, learning rate, optimizer) produce minimizers that generalize better. However, the reasons for these differences, and their effects on the underlying loss landscape, are not well understood. In this paper, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. First, we introduce a simple "filter normalization" method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. Then, using a variety of visualizations, we explore how network architecture affects the loss landscape, and how training parameters affect the shape of minimizers.
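The filter normalization the abstract describes rescales each filter of a random direction so its norm matches that of the corresponding filter in the trained weights, then plots the loss on a 2D slice through weight space. Below is a minimal NumPy sketch of that idea, assuming conv weights stored as an array of shape (out_channels, ...); the repository itself (net_plotter.py, plot_surface.py) operates on PyTorch models, so names and shapes here are illustrative only.

```python
import numpy as np

def filter_normalize(direction, weights):
    """Rescale each filter of a random direction so its norm matches the
    corresponding filter of the trained weights (the paper's filter
    normalization, sketched here for arrays indexed by output channel)."""
    d = direction.copy()
    for i in range(d.shape[0]):                     # one filter per output channel
        d_norm = np.linalg.norm(d[i])
        w_norm = np.linalg.norm(weights[i])
        d[i] *= w_norm / (d_norm + 1e-10)           # guard against zero-norm filters
    return d

def loss_surface(loss_fn, theta, d1, d2, alphas, betas):
    """Evaluate L(theta + alpha*d1 + beta*d2) on a grid of (alpha, beta),
    giving the 2D slice of the loss landscape that gets contour-plotted."""
    return np.array([[loss_fn(theta + a * d1 + b * d2) for b in betas]
                     for a in alphas])

# Toy usage: two filter-normalized random directions and a quadratic "loss".
rng = np.random.default_rng(0)
theta = rng.normal(size=(4, 3, 3, 3))               # stand-in for trained conv weights
d1 = filter_normalize(rng.normal(size=theta.shape), theta)
d2 = filter_normalize(rng.normal(size=theta.shape), theta)
surface = loss_surface(lambda t: float((t ** 2).sum()),
                       theta, d1, d2,
                       np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
```

Because each filter of d1 and d2 now has the same scale as the corresponding filter of theta, curvature along the two axes is comparable across networks, which is what makes side-by-side landscape plots meaningful.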
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Machine Learning