Statistical Quality and Reproducibility of Pseudorandom Number Generators in Machine Learning Technologies

July 02, 2025 · Declared Dead · 🏛 International Journal of Data Informatics and Intelligent Computing

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Benjamin A. Antunes
arXiv ID: 2507.03007
Category: cs.OH (Other CS)
Cross-listed: cs.CR, cs.LG
Citations: 0
Venue: International Journal of Data Informatics and Intelligent Computing
Last Checked: 1 month ago
Abstract
Machine learning (ML) frameworks rely heavily on pseudorandom number generators (PRNGs) for tasks such as data shuffling, weight initialization, dropout, and optimization. Yet the statistical quality and reproducibility of these generators, particularly when integrated into frameworks like PyTorch, TensorFlow, and NumPy, are underexplored. In this paper, we compare the statistical quality of PRNGs used in ML frameworks (Mersenne Twister, PCG, and Philox) against their original C implementations. Using the rigorous TestU01 BigCrush test suite, we evaluate 896 independent random streams for each generator. Our findings challenge claims of statistical robustness, revealing that even generators labeled "crush-resistant" (e.g., PCG, Philox) may fail certain statistical tests. Surprisingly, we observe differences in failure profiles between the native and framework-integrated versions of the same algorithm, pointing to implementation differences between them.
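The three algorithms the abstract names are all exposed as BitGenerators in NumPy, which makes the framework-integration angle easy to illustrate. The sketch below (not code from the paper; the seed value and sample sizes are arbitrary choices for illustration) draws a stream from each algorithm under a fixed seed and demonstrates the basic reproducibility property the paper probes: re-seeding the same algorithm reproduces the identical stream.

```python
import numpy as np

SEED = 12345  # arbitrary fixed seed for illustration

# The three PRNG algorithms compared in the paper, as integrated in NumPy.
generators = {
    "MT19937 (Mersenne Twister)": np.random.Generator(np.random.MT19937(SEED)),
    "PCG64": np.random.Generator(np.random.PCG64(SEED)),
    "Philox": np.random.Generator(np.random.Philox(SEED)),
}

# Each algorithm produces a distinct stream from the same seed.
for name, rng in generators.items():
    print(name, rng.random(5))  # five doubles in [0, 1)

# Reproducibility: re-seeding the same algorithm yields an identical
# stream. The paper's comparison asks whether the *statistical quality*
# of such streams also matches the original C implementations.
a = np.random.Generator(np.random.Philox(SEED)).random(10)
b = np.random.Generator(np.random.Philox(SEED)).random(10)
assert np.array_equal(a, b)
```

Note that bit-for-bit reproducibility within one framework, as shown here, says nothing about whether the framework's integration of an algorithm behaves identically to the reference C code; that gap is exactly what the BigCrush failure profiles in the paper expose.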
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Other CS

Died the same way – 👻 Ghosted