Classification and Geometry of General Perceptual Manifolds
October 17, 2017 · Declared Dead · Physical Review X
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
SueYeon Chung, Daniel D. Lee, Haim Sompolinsky
arXiv ID
1710.06487
Category
cond-mat.dis-nn
Cross-listed
cond-mat.stat-mech,
cs.NE,
q-bio.NC,
stat.ML
Citations
174
Venue
Physical Review X
Last Checked
1 month ago
Abstract
Perceptual manifolds arise when a neural population responds to an ensemble of sensory signals associated with different physical features (e.g., orientation, pose, scale, location, and intensity) of the same perceptual object. Object recognition and discrimination requires classifying the manifolds in a manner that is insensitive to variability within a manifold. How neuronal systems give rise to invariant object classification and recognition is a fundamental problem in brain theory as well as in machine learning. Here we study the ability of a readout network to classify objects from their perceptual manifold representations. We develop a statistical mechanical theory for the linear classification of manifolds with arbitrary geometry revealing a remarkable relation to the mathematics of conic decomposition. Novel geometrical measures of manifold radius and manifold dimension are introduced which can explain the classification capacity for manifolds of various geometries. The general theory is demonstrated on a number of representative manifolds, including L2 ellipsoids prototypical of strictly convex manifolds, L1 balls representing polytopes consisting of finite sample points, and orientation manifolds which arise from neurons tuned to respond to a continuous angle variable, such as object orientation. The effects of label sparsity on the classification capacity of manifolds are elucidated, revealing a scaling relation between label sparsity and manifold radius. Theoretical predictions are corroborated by numerical simulations using recently developed algorithms to compute maximum margin solutions for manifold dichotomies. Our theory and its extensions provide a powerful and rich framework for applying statistical mechanics of linear classification to data arising from neuronal responses to object stimuli, as well as to artificial deep networks trained for object recognition tasks.
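The abstract mentions numerical simulations that estimate the linear classification capacity of manifold dichotomies. As a minimal illustrative sketch (not the authors' algorithm, and using a plain perceptron in place of the max-margin solvers cited in the paper), one can estimate the fraction of separable random dichotomies of point-cloud "manifolds" as a function of the load alpha = P/N; all function names here are hypothetical:

```python
# Illustrative sketch: estimating linear classification capacity by simulation.
# NOT the paper's algorithm; a perceptron stands in for max-margin solvers.
import numpy as np

rng = np.random.default_rng(0)

def separable(points, labels, epochs=200):
    """Classic perceptron check: does a homogeneous hyperplane w classify
    every point according to its manifold's label?"""
    w = np.zeros(points.shape[1])
    for _ in range(epochs):
        updated = False
        for x, y in zip(points, labels):
            if y * (x @ w) <= 0:      # misclassified (or on the boundary)
                w += y * x            # perceptron update
                updated = True
        if not updated:               # converged: all margins positive
            return True
    return False                      # treat non-convergence as non-separable

def separability_fraction(P, N, M=1, radius=0.0, trials=10):
    """Fraction of random manifold dichotomies that are linearly separable.
    Each of the P manifolds is M Gaussian points of the given radius around
    a random center in N dimensions; labels are random +/-1 per manifold."""
    ok = 0
    for _ in range(trials):
        centers = rng.standard_normal((P, N))
        pts = np.concatenate(
            [c + radius * rng.standard_normal((M, N)) for c in centers])
        labs = np.repeat(rng.choice([-1.0, 1.0], size=P), M)
        ok += separable(pts, labs)
    return ok / trials

# For single points (M=1, radius=0), Cover's classic result puts the capacity
# at alpha = P/N = 2: well below it almost every dichotomy is separable,
# well above it almost none is.
low = separability_fraction(P=20, N=40)    # alpha = 0.5, below capacity
high = separability_fraction(P=160, N=40)  # alpha = 4, above capacity
print(low, high)
```

Setting M > 1 with a positive radius gives extended manifolds, for which the paper's theory predicts a capacity below the point-capacity of 2, shrinking as the manifold radius and dimension grow; e.g., comparing `separability_fraction(P=60, N=40)` against `separability_fraction(P=60, N=40, M=5, radius=1.0)` illustrates the drop.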
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt: cond-mat.dis-nn
Mutual Information, Neural Networks and the Renormalization Group · Ghosted
Machine learning meets network science: dimensionality reduction for fast and efficient embedding of networks in the hyperbolic space · Ghosted
The jamming transition as a paradigm to understand the loss landscape of deep neural networks · Ghosted
Criticality in Formal Languages and Statistical Physics · Ghosted
Simplicial complexes: higher-order spectral dimension and dynamics · Ghosted
Died the same way: Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System