Addressing A Posteriori Performance Degradation in Neural Network Subgrid Stress Models
November 21, 2025 · Declared Dead · arXiv.org
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Andy Wu, Sanjiva K. Lele
arXiv ID
2511.17475
Category
physics.flu-dyn
Cross-listed
cs.LG
Citations
0
Venue
arXiv.org
Last Checked
1 month ago
Abstract
Neural network subgrid stress models often perform far better a priori than a posteriori, so models that look very promising a priori can fail completely in a posteriori Large Eddy Simulations (LES). This performance gap can be narrowed by combining two methods: training data augmentation and reducing the complexity of the network's inputs. Augmenting the training data with two different filters before training causes no a priori performance degradation compared to a neural network trained with a single filter; a posteriori, networks trained with two different filters are far more robust across two LES codes with different numerical schemes. In addition, ablating the higher-order terms from the network's inputs reduces the discrepancy between a priori and a posteriori performance. When combined, neural networks that use both training data augmentation and a less complex set of inputs have a posteriori performance far more reflective of their a priori evaluation.
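The filter-augmentation idea in the abstract can be illustrated with a minimal sketch (not the authors' code; the 1-D setting, filter choices, and widths are illustrative assumptions): filter a resolved field with two different LES filters, compute the corresponding subgrid stress tau = bar(u*u) - bar(u)*bar(u) for each, and concatenate the pairs so the network sees both filter shapes during training.

```python
# Minimal sketch (not the authors' code): training-data augmentation with
# two different LES filters, as the abstract describes. The 1-D field,
# box/Gaussian filter choice, and filter widths are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter1d, uniform_filter1d

rng = np.random.default_rng(0)
u = rng.standard_normal(1024)  # stand-in for a DNS velocity field

def subgrid_stress(u, filt):
    """tau = bar(u*u) - bar(u)*bar(u) for a given filter operator."""
    return filt(u * u) - filt(u) * filt(u)

# Two different filters: a box (top-hat) filter and a Gaussian filter.
box = lambda f: uniform_filter1d(f, size=8, mode="wrap")
gauss = lambda f: gaussian_filter1d(f, sigma=4.0, mode="wrap")

# One (input, target) pair per filter; concatenating doubles the training
# set and exposes the network to both filter shapes.
X = np.concatenate([box(u), gauss(u)])
y = np.concatenate([subgrid_stress(u, box), subgrid_stress(u, gauss)])
print(X.shape, y.shape)  # (2048,) (2048,)
```

Training on `(X, y)` pairs from both filters, rather than one, is what the abstract credits for the improved a posteriori robustness across LES codes.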
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · physics.flu-dyn
Efficient collective swimming by harnessing vortices through deep reinforcement learning · 👻 Ghosted
NVIDIA SimNet^{TM}: an AI-accelerated multi-physics simulation framework · 👻 Ghosted
Teaching the Incompressible Navier-Stokes Equations to Fast Neural Surrogate Models in 3D · 👻 Ghosted
Prediction of Reynolds Stresses in High-Mach-Number Turbulent Boundary Layers using Physics-Informed Machine Learning · 👻 Ghosted
From Deep to Physics-Informed Learning of Turbulence: Diagnostics · 👻 Ghosted
Died the same way · 👻 Ghosted
Language Models are Few-Shot Learners · 👻 Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library · 👻 Ghosted
XGBoost: A Scalable Tree Boosting System · 👻 Ghosted