18th ACM/SIGEVO Conference on Foundations of Genetic Algorithms

FOGA XVIII, Aug 27 – 29, 2025, Leiden, The Netherlands

FOGA 2025 is a conference organized by ACM/SIGEVO and hosted by the Leiden Institute of Computer Science (LIACS) in Leiden, The Netherlands. This year, the conference will take place over three days (Wednesday to Friday).

Call for Papers

The FOGA series aims at advancing our understanding of the working principles behind evolutionary algorithms and related randomized search heuristics, such as local search algorithms, differential evolution, ant colony optimization, particle swarm optimization, artificial immune systems, simulated annealing, and other Monte Carlo methods for search and optimization. Connections to related areas, such as Bayesian optimization and direct search, are of interest as well. FOGA is the premier event to discuss advances on the theoretical foundations of these algorithms, tools needed to analyze them, and different aspects of comparing algorithms’ performance. Topics of interest include, but are not limited to:

Submissions covering the entire spectrum of work, ranging from rigorously derived mathematical results to carefully crafted empirical studies, are invited.

List of important dates (all times are “Anywhere on Earth”):

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.

Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start, and we have recently made a commitment to collect ORCID IDs from all of our published authors. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

Keynote Speakers

Joshua D. Knowles
Title: To be announced.
Bio: Based in the UK, Joshua Knowles is a scientific advisor for the multinational energy technology company SLB, and is a former Professor of Natural Computation at the University of Birmingham where he is presently an honorary senior fellow. He is also an honorary professor in the decision sciences group of Alliance Manchester Business School at The University of Manchester. A central figure in evolutionary multiobjective optimization (EMO) since the late 90s, his work includes fundamental research on archiving with diversity, performance assessment, local search, hypervolume-as-selection, machine decision makers, heterogeneous objectives, and “multiobjectivization”. In 2004-5, he developed the influential multiobjective Bayesian optimization method, ParEGO, for expensive problems. More broadly, Josh is interested in and has published (joint work) on the evolution of evolvability, the evolution of cooperation, neutral evolution, and symbiogenesis (including Deep Optimization). Collaborating across disciplines, Josh has published applications of evolutionary and ML methods in premier journals in astrophysics, analytical chemistry, theoretical biology, bioinformatics, and operations research. In 2024, he was part of an international team that automated the design-engineering of drill bits for oilfield or geothermal drilling by EMO methods with deployment and testing in live operations. He is presently involved in pitching for investment in super-efficient heat exchanger technologies (for hyperscale data centers) partly designed by evolutionary methods.
Stephanie Wehner
Delft University of Technology, The Netherlands
Tobias Glasmachers
Title: Additive drift is all you need -- if you are an evolution strategy.
Abstract: Drift analysis is a great tool for proving that optimization algorithms work the way we think they do, and for analyzing them, potentially in great detail. In this talk I will discuss drift analysis for evolution strategies. These algorithms exhibit linear convergence on a wide range of problems, which corresponds to a linear decrease of the logarithmic distance of the best-so-far sample from the optimum, giving rise to simple additive drift. That behavior is enabled by online adaptation of the step size, which decays at the same rate as the distance to the optimum.

Moreover, modern evolution strategies like CMA-ES adapt not only the step size, but rather the full covariance matrix of their sampling distribution. This mechanism enables convergence at a problem-independent rate that depends only on the dimension of the search space. The primary challenge of proving the convergence of CMA-ES lies in establishing the stability of the adaptation process, which was recently achieved by analyzing the invariant Markov chain that describes the parameter adaptation process. Yet, a drift-based analysis is still desirable because it can yield much more fine-grained results. For instance, it can provide details about the transient adaptation phase, which often takes up the lion's share of the time for solving the problem.

To achieve this, we need a potential function that appropriately penalizes unsuitable parameter configurations, or more precisely, configurations the algorithm tends to move away from. Designing a potential function that captures the dynamics of covariance matrix adaptation is an ongoing challenge. I will present our recent research efforts towards this goal and emphasize why relatively simple additive drift offers a powerful framework for achieving it.
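The linear-convergence behavior described in the abstract can be seen in even the simplest evolution strategy. The following sketch (our own toy illustration, not material from the talk) runs a (1+1)-ES with 1/5th-success-rule step-size adaptation on the sphere function; the logarithmic distance to the optimum decreases roughly linearly over iterations, and the step size decays at the same rate:

```python
import math
import random

def sphere(x):
    """Sphere function: squared Euclidean distance to the optimum at 0."""
    return sum(xi * xi for xi in x)

def one_plus_one_es(dim=10, sigma=1.0, iters=2000, seed=1):
    """(1+1)-ES with 1/5th-success-rule step-size adaptation.

    On success sigma grows by exp(0.2), on failure it shrinks by
    exp(-0.05); the expected log-change is zero exactly at a success
    rate of 1/5, which keeps sigma proportional to the distance to
    the optimum -- the mechanism behind additive drift in log scale.
    """
    rng = random.Random(seed)
    x = [1.0] * dim
    fx = sphere(x)
    for _ in range(iters):
        # Sample one isotropic Gaussian offspring.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = sphere(y)
        if fy <= fx:
            x, fx = y, fy
            sigma *= math.exp(0.2)   # success: increase step size
        else:
            sigma *= math.exp(-0.05) # failure: decrease step size
    return fx, sigma

fx, sigma = one_plus_one_es()
# fx shrinks geometrically with the iteration count, i.e. log(fx)
# drifts downward at an (approximately) constant additive rate.
```

Plotting `math.log(fx)` against the iteration counter would show the near-straight line that additive drift arguments formalize; the full covariance-matrix adaptation of CMA-ES discussed in the talk is, of course, much harder to analyze than this scalar step-size rule.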


Organizers

Anna V. Kononova – General Chair
Thomas Bäck – General Chair
Niki van Stein – Proceedings Chair
Elena Raponi – Local Chair
Carola Doerr – Publicity Chair