18th ACM/SIGEVO Conference on Foundations of Genetic Algorithms

FOGA XVIII, Aug (26) 27 – 29, 2025, Leiden, The Netherlands

FOGA 2025 is a conference organized by ACM/SIGEVO and hosted by the Leiden Institute of Advanced Computer Science (LIACS) in Leiden, The Netherlands. This year, the conference takes place over three days (Wednesday, August 27 – Friday, August 29).
The conference will be preceded by an informal meeting of the ROAR-NET COST Action on Tuesday, August 26. The meeting is open to everyone, whether or not they are members of the COST Action.

Call for Papers

The FOGA series aims to advance our understanding of the working principles behind evolutionary algorithms and related randomized search heuristics, such as local search algorithms, differential evolution, ant colony optimization, particle swarm optimization, artificial immune systems, simulated annealing, and other Monte Carlo methods for search and optimization. Connections to related areas, such as Bayesian optimization and direct search, are of interest as well. FOGA is the premier event for discussing advances in the theoretical foundations of these algorithms, the tools needed to analyze them, and different aspects of comparing algorithms' performance. Topics of interest include, but are not limited to:

Submissions covering the entire spectrum of work, ranging from rigorously derived mathematical results to carefully crafted empirical studies, are invited.

List of important dates (all times are “Anywhere on Earth”):

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.

Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start, and we have recently made a commitment to collect ORCID IDs from all of our published authors. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

Keynote Speakers

Joshua D. Knowles
Title: Answering Hamming
Abstract: The story goes that while working at Bell Labs in the 1950s, the mathematician and computer scientist Richard Hamming would ask colleagues, "What's the most important problem in your field?" … and then follow up with, "So, why aren't you working on it?" Both questions have many possible answers, even for just one person at one time, but they are certainly provocative, tough, and uncomfortable. In this talk, I will reflect on my personal answers at various times, on some answers for evolutionary computation (EC) and evolutionary multiobjective optimization (EMO) more broadly, and on answers for fields adjacent to EC/EMO as well as for industrial research and innovation. My particular answers (or anyone's) are almost certainly not as important as the effort to grapple with the questions.

Bio: Based in the UK, Joshua Knowles is a scientific advisor for the multinational energy technology company SLB, an honorary professor in the decision sciences group of Alliance Manchester Business School at The University of Manchester, and a former professor of natural computation at the University of Birmingham. Publishing in evolutionary multiobjective optimization (EMO) since the late 1990s, his work includes fundamental research on archiving with diversity, local search, performance assessment, hypervolume-as-selection, machine decision makers, heterogeneous objectives, and "multiobjectivization". In 2004–2005, he developed ParEGO, an influential multiobjective Bayesian optimization method for expensive problems. More broadly, Josh is interested in and has published (joint) work on the evolution of evolvability, the evolution of cooperation, neutral evolution, and symbiogenesis (including Deep Optimization). He has also published evolutionary and machine learning applications in premier journals in astrophysics, analytical chemistry, theoretical biology, bioinformatics, and operations research.
Stephanie Wehner
Delft University of Technology, The Netherlands
Tobias Glasmachers
Title: Additive drift is all you need -- if you are an evolution strategy.
Abstract: Drift analysis is a great tool for proving that optimization algorithms work the way we think they do, and for analyzing them, potentially in great detail. In this talk I will discuss drift analysis for evolution strategies. These algorithms exhibit linear convergence on a wide range of problems, which corresponds to a linear decrease of the logarithmic distance of the best-so-far sample from the optimum, giving rise to simple additive drift. That behavior is enabled by online adaptation of the step size, which decays at the same rate as the distance to the optimum. Moreover, modern evolution strategies like CMA-ES adapt not only the step size but the full covariance matrix of their sampling distribution. This mechanism enables convergence at a problem-independent rate that depends only on the dimension of the search space. The primary challenge of proving the convergence of CMA-ES lies in establishing the stability of the adaptation process, which was recently achieved by analyzing the invariant Markov chain that describes the parameter adaptation process. Yet, a drift-based analysis is still desirable because it can yield much more fine-grained results. For instance, it can provide details about the transient adaptation phase, which often takes up the lion's share of the time for solving the problem. To achieve this, we need a potential function that appropriately penalizes unsuitable parameter configurations, or more precisely, configurations the algorithm tends to move away from. Designing a potential function that captures the dynamics of covariance matrix adaptation is an ongoing challenge. I will present our recent research efforts towards this goal and emphasize why relatively simple additive drift offers a powerful framework for achieving it.
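
The linear convergence described in the abstract is easy to observe empirically. Below is a minimal sketch (our illustration, not material from the talk) of a (1+1)-ES with the classic 1/5th success rule on the sphere function; the adaptation strength alpha and all other parameter choices are illustrative assumptions. Printing the logarithm of the distance to the optimum shows a roughly constant decrease per iteration, i.e. additive drift of the log-distance.

    import numpy as np

    # Minimal (1+1)-ES with the classic 1/5th success rule on the sphere
    # function f(x) = ||x||^2. Illustrative sketch only; parameter choices
    # are assumptions, not taken from the talk.

    def sphere(x):
        return float(np.dot(x, x))

    rng = np.random.default_rng(0)
    dim = 10
    x = rng.normal(size=dim)   # current parent
    f_x = sphere(x)
    sigma = 1.0                # step size, adapted online
    alpha = np.exp(1.0 / dim)  # assumed adaptation strength

    for t in range(3001):
        y = x + sigma * rng.normal(size=dim)  # one Gaussian offspring
        f_y = sphere(y)
        if f_y <= f_x:
            x, f_x = y, f_y
            sigma *= alpha          # success: increase the step size
        else:
            sigma /= alpha ** 0.25  # failure: decrease it (balances at ~1/5 successes)
        if t % 500 == 0:
            # the log-distance to the optimum falls roughly linearly in t
            print(t, np.log(np.linalg.norm(x)))

With the log-distance as potential function, this approximately constant expected per-step decrease is exactly the simple additive drift referred to in the title.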

Bio: Tobias Glasmachers is a professor at the Ruhr-University Bochum, Germany. He received his Diploma and Doctorate degrees in mathematics from the Ruhr-University Bochum in 2004 and 2008, respectively. From 2009 to 2011 he was with the Swiss AI lab IDSIA. He then returned to Bochum, where he was a junior professor for machine learning at the Institute for Neural Computation (INI) from 2012 to 2018, when he was promoted to full professor. His research interests are machine learning and optimization.

Organizers

Anna V. Kononova (General Chair)
Thomas Bäck (General Chair)
Niki van Stein (Proceedings Chair)
Elena Raponi (Local Chair)
Carola Doerr (Publicity Chair)