An asynchronous incomplete block LU preconditioner for computational fluid dynamics on unstructured grids
December 02, 2019 · Declared Dead · SIAM Journal on Scientific Computing
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Aditya Kashi, Siva Nadarajah
arXiv ID
1912.00539
Category
math.NA: Numerical Analysis
Cross-listed
cs.DC
Citations
5
Venue
SIAM Journal on Scientific Computing
Last Checked
1 month ago
Abstract
We present a study of the effectiveness of asynchronous incomplete LU factorization preconditioners for the time-implicit solution of compressible flow problems while exploiting thread-parallelism within a compute node. A block variant of the asynchronous fine-grain parallel preconditioner adapted to a finite volume discretization of the compressible Navier-Stokes equations on unstructured grids is presented, and convergence theory is extended to the new variant. Experimental (numerical) results on the performance of these preconditioners on inviscid and viscous laminar two-dimensional steady-state test cases are reported. It is found, for these compressible flow problems, that the block variant performs much better in terms of convergence, parallel scalability and reliability than the original scalar asynchronous ILU preconditioner. For viscous flow, it is found that the ordering of unknowns may determine the success or failure of asynchronous block-ILU preconditioning, and an ordering of grid cells suitable for solving viscous problems is presented.
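The asynchronous fine-grained ILU preconditioner the abstract refers to treats the incomplete factorization as a fixed-point iteration over the nonzeros of L and U, so that all entries can be updated concurrently in "sweeps" instead of in the sequential elimination order. As a rough illustration only — a dense, synchronous Jacobi-style sketch of the scalar ILU(0) sweep, not the authors' block implementation; the function name and sweep count are invented for this example:

```python
import numpy as np

def async_ilu0_sweeps(A, nsweeps=10):
    """Fine-grained ILU(0) fixed-point sweeps (illustrative sketch).

    Each sweep recomputes every nonzero of L and U from the current
    values of the other entries; in a truly asynchronous implementation
    these updates run concurrently with no fixed ordering. Dense arrays
    are used here purely for clarity.
    """
    n = A.shape[0]
    pattern = A != 0                     # ILU(0): keep the sparsity of A
    L = np.tril(A, -1) / np.diag(A)      # simple initial guess l_ij = a_ij / a_jj
    np.fill_diagonal(L, 1.0)             # unit lower-triangular L
    U = np.triu(A)                       # initial guess U = upper part of A
    for _ in range(nsweeps):
        Lnew, Unew = L.copy(), U.copy()  # Jacobi-style: read old, write new
        for i in range(n):
            for j in range(n):
                if not pattern[i, j]:
                    continue
                if i > j:                # strictly lower entry: update l_ij
                    s = A[i, j] - L[i, :j] @ U[:j, j]
                    Lnew[i, j] = s / U[j, j]
                else:                    # upper entry (incl. diagonal): update u_ij
                    Unew[i, j] = A[i, j] - L[i, :i] @ U[:i, j]
        L, U = Lnew, Unew
    return L, U
```

In the setting of the paper, the scalar entries above become small dense blocks (one per pair of neighbouring grid cells in the finite-volume Jacobian), and the sweeps run asynchronously across threads; the abstract's observation about cell ordering matters because the fixed-point iteration's convergence depends on how unknowns are numbered.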
Similar Papers

In the same crypt – Numerical Analysis
- Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations (👻 Ghosted)
- PDE-Net: Learning PDEs from Data (👻 Ghosted)
- Efficient tensor completion for color image and video recovery: Low-rank tensor train (👻 Ghosted)
- Tensor Ring Decomposition (👻 Ghosted)
- Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations (👻 Ghosted)

Died the same way – 👻 Ghosted
- Language Models are Few-Shot Learners (👻 Ghosted)
- PyTorch: An Imperative Style, High-Performance Deep Learning Library (👻 Ghosted)
- XGBoost: A Scalable Tree Boosting System (👻 Ghosted)