Runtime Analysis of the $(1+(\lambda,\lambda))$ Genetic Algorithm on Random Satisfiable 3-CNF Formulas
April 14, 2017 · Declared Dead · 🏛 Annual Conference on Genetic and Evolutionary Computation
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors: Maxim Buzdalov, Benjamin Doerr
arXiv ID: 1704.04366
Category: cs.NE: Neural and Evolutionary Computing
Citations: 65
Venue: Annual Conference on Genetic and Evolutionary Computation
Last Checked: 1 month ago
Abstract
The $(1+(\lambda,\lambda))$ genetic algorithm, first proposed at GECCO 2013, showed a surprisingly good performance on some optimization problems. The theoretical analysis so far was restricted to the OneMax test function, where this GA profited from the perfect fitness-distance correlation. In this work, we conduct a rigorous runtime analysis of this GA on random 3-SAT instances in the planted solution model with at least logarithmic average degree, which are known to have a weaker fitness-distance correlation. We prove that this GA, with a fixed and not too large population size, again obtains runtimes better than $\Theta(n \log n)$, which is a lower bound for most evolutionary algorithms on pseudo-Boolean problems with a unique optimum. However, the self-adjusting version of the GA risks reaching population sizes at which the intermediate selection of the GA, due to the weaker fitness-distance correlation, is no longer able to distinguish a profitable offspring from the others. We show that this problem can be overcome by equipping the self-adjusting GA with an upper limit on the population size. Except for sparse instances, this limit can be chosen so that the asymptotic performance does not worsen compared to the idealistic OneMax case. Overall, this work shows that the $(1+(\lambda,\lambda))$ GA can provably perform well on combinatorial search and optimization problems even in the presence of a weaker fitness-distance correlation.
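The abstract's moving parts, namely the two-phase mutation/crossover structure, the intermediate selection among mutants, and the self-adjusting population size with an upper limit, are easiest to see in code. The following is a minimal Python sketch of the standard $(1+(\lambda,\lambda))$ GA from GECCO 2013, applied to a MAX-3SAT fitness on a planted-solution instance. Everything here is an illustration rather than the authors' implementation: the instance generator, the function names, the one-fifth-rule update strength F = 1.5, and the cap lam_max = 2 ln n are assumed defaults, not values taken from the paper.

```python
import math
import random

def planted_3sat(n, m, rng):
    """Illustrative planted-solution generator: clauses are sampled
    uniformly among those satisfied by the planted all-ones assignment,
    i.e. clauses with at least one positive literal."""
    clauses = []
    while len(clauses) < m:
        variables = rng.sample(range(n), 3)
        signs = [rng.randrange(2) for _ in range(3)]  # 1 = positive literal
        if any(signs):  # reject clauses falsified by the planted assignment
            clauses.append(list(zip(variables, signs)))
    return clauses

def num_satisfied(clauses, x):
    """MAX-3SAT fitness: number of clauses satisfied by assignment x."""
    return sum(any(x[v] == s for v, s in clause) for clause in clauses)

def one_plus_lambda_lambda_ga(f, n, f_max, lam_max=None, budget=200_000,
                              rng=None):
    """Sketch of the (1+(lambda,lambda)) GA with a self-adjusting,
    optionally capped population size lambda."""
    rng = rng or random.Random()
    F = 1.5        # update strength of the one-fifth success rule (assumed)
    lam = 1.0
    x = [rng.randrange(2) for _ in range(n)]
    fx = f(x)
    evals = 1
    while fx < f_max and evals < budget:
        k = max(1, round(lam))
        # Mutation phase: draw one mutation strength ell ~ Bin(n, lam/n),
        # create k offspring by flipping ell random bits each, keep the best.
        ell = sum(rng.random() < lam / n for _ in range(n))
        best_mut, best_mut_f = None, -1
        for _ in range(k):
            y = x[:]
            for i in rng.sample(range(n), ell):
                y[i] ^= 1
            fy = f(y)
            evals += 1
            if fy > best_mut_f:
                best_mut, best_mut_f = y, fy
        # Crossover phase: k biased uniform crossovers between the parent
        # and the best mutant, taking each bit from the mutant w.p. 1/lam.
        best_cross, best_cross_f = None, -1
        for _ in range(k):
            y = [b if rng.random() < 1.0 / lam else a
                 for a, b in zip(x, best_mut)]
            fy = f(y)
            evals += 1
            if fy > best_cross_f:
                best_cross, best_cross_f = y, fy
        # One-fifth rule: shrink lambda on strict improvement, grow it
        # otherwise; lam_max is the upper limit discussed in the abstract.
        if best_cross_f > fx:
            lam = max(1.0, lam / F)
        else:
            lam *= F ** 0.25
            if lam_max is not None:
                lam = min(lam, lam_max)
        # Elitist selection: accept the crossover winner on ties or better.
        if best_cross_f >= fx:
            x, fx = best_cross, best_cross_f
    return x, fx, evals

if __name__ == "__main__":
    rng = random.Random(42)
    n = 200
    m = round(4 * n * math.log(n))  # at least logarithmic average degree
    clauses = planted_3sat(n, m, rng)
    x, fx, evals = one_plus_lambda_lambda_ga(
        lambda a: num_satisfied(clauses, a), n, f_max=m,
        lam_max=2 * math.log(n), rng=rng)
    print(f"satisfied {fx}/{m} clauses after {evals} evaluations")
```

Running the sketch with lam_max=None mimics the uncapped self-adjusting variant: a streak of unsuccessful iterations lets $\lambda$, and with it the cost per iteration, keep growing, which is the failure mode the abstract attributes to the weaker fitness-distance correlation.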
📜 Similar Papers
In the same crypt — Neural & Evolutionary
Progressive Growing of GANs for Improved Quality, Stability, and Variation
Learning both Weights and Connections for Efficient Neural Networks
LSTM: A Search Space Odyssey
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
An Introduction to Convolutional Neural Networks
Died the same way — 👻 Ghosted
Language Models are Few-Shot Learners
PyTorch: An Imperative Style, High-Performance Deep Learning Library
XGBoost: A Scalable Tree Boosting System