Leiden, The Netherlands – The Natural Computing research cluster at LIACS, Leiden University is proud to share that ten of our papers — plus three Hot Off the Press and one Competition paper — have been accepted for presentation at the Genetic and Evolutionary Computation Conference (GECCO) 2025.
GECCO is the premier venue in the field of evolutionary computation, drawing international researchers working at the forefront of algorithm design, optimization, and learning. This year’s accepted contributions highlight the breadth of our ongoing work — from LLM-assisted algorithm discovery and quantum-inspired optimization, to robust strategies for noisy and diverse search spaces.
Our Accepted Contributions
Utility-aware Social Network Anonymization using Genetic Algorithms
Authors: Samuel Bonello, Rachel G. de Jong, Thomas H. W. Bäck, Frank W. Takes
Abstract: Social networks may contain privacy-sensitive information about individuals. The objective of the network anonymization problem is to alter a given social network dataset such that the number of anonymous nodes in the social graph is maximized. Here, a node is anonymous if it does not have a unique surrounding network structure. At the same time, the aim is to ensure data utility, i.e., preserve topological network properties and retain good performance on downstream network analysis tasks. We propose two versions of a genetic algorithm tailored to this problem: one generic GA and a uniqueness-aware GA (UGA). The latter aims to target edges more effectively during mutation by avoiding edges connected to already anonymous nodes. After hyperparameter tuning, we compare the two GAs against two existing baseline algorithms on several real-world network datasets. Results show that the proposed genetic algorithms manage to anonymize on average 14 times more nodes than the best baseline algorithm. Additionally, data utility experiments demonstrate how the UGA requires fewer edge deletions, and how our GAs and the baselines retain performance on downstream tasks equally well. Overall, our results suggest that genetic algorithms are a promising approach for finding solutions to the network anonymization problem.
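For readers curious about the mechanism, the sketch below illustrates the uniqueness-aware mutation idea in hypothetical form (our own illustration, not the paper's implementation; `is_anonymous` stands in for a structural-uniqueness test and `deletion_rate` is an assumed parameter):

```python
import random
import networkx as nx

def uniqueness_aware_mutation(graph, is_anonymous, deletion_rate=0.01):
    """Delete a small fraction of edges, skipping edges whose endpoints
    are both already anonymous (removing those cannot increase anonymity
    and only costs utility). `is_anonymous` maps node -> bool and stands
    in for a structural-uniqueness test on the surrounding network."""
    mutant = graph.copy()
    candidates = [
        (u, v) for u, v in mutant.edges()
        if not (is_anonymous[u] and is_anonymous[v])
    ]
    k = max(1, int(deletion_rate * len(candidates)))
    for u, v in random.sample(candidates, min(k, len(candidates))):
        mutant.remove_edge(u, v)
    return mutant
```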
Abnormal Mutations: Evolution Strategies Don’t Require Gaussianity
Authors: Jacob de Nobel, Diederick Vermetten, Hao Wang, Anna V. Kononova, Günter Rudolph, Thomas Bäck
Abstract: The mutation process in evolution strategies has been interlinked with the normal distribution since its inception. Many lines of reasoning have been given for this strong dependency, ranging from maximum entropy arguments to the need for isotropy. However, some theoretical results suggest that other distributions might lead to similar local convergence properties. This paper empirically shows that a wide range of evolution strategies, from the (1+1)-ES to CMA-ES, exhibit comparable optimization performance when using a mutation distribution other than the standard Gaussian. Replacing it with, e.g., uniformly distributed mutations does not deteriorate the performance of the ES when the default adaptation mechanism for the strategy parameters is used. We observe that these results hold not only for the sphere model but also for a wider range of benchmark problems.
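As an illustration of how little changes when Gaussianity is dropped, here is a minimal (1+1)-ES sketch with the 1/5 success rule in which only the sampling line differs from the textbook Gaussian variant (the constants and scaling below are our own assumptions, not the paper's setup):

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, budget=10_000, seed=None):
    """(1+1)-ES with Rechenberg's 1/5 success rule, using uniform
    instead of Gaussian mutations."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, sigma = f(x), sigma0
    for _ in range(budget):
        # Uniform on [-sqrt(3), sqrt(3)] has unit variance, so sigma
        # plays the same role as in the Gaussian case.
        z = rng.uniform(-np.sqrt(3), np.sqrt(3), size=x.shape)
        y = x + sigma * z
        fy = f(y)
        if fy <= fx:                 # success: accept and enlarge sigma
            x, fx = y, fy
            sigma *= 1.5
        else:                        # failure: shrink sigma (1/5 rule)
            sigma *= 1.5 ** -0.25
    return x, fx

# Minimize the 10-dimensional sphere function.
x_best, f_best = one_plus_one_es(lambda v: float(np.dot(v, v)), np.ones(10))
```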
Evolving Hard Maximum Cut Instances for Quantum Approximate Optimization Algorithms
Authors: Shuaiqun Pan, Yash J. Patel, Aneta Neumann, Frank Neumann, Thomas Bäck, Hao Wang
Abstract: Variational quantum algorithms, such as the Recursive Quantum Approximate Optimization Algorithm (RQAOA), have become increasingly popular, offering promising avenues for employing Noisy Intermediate-Scale Quantum devices to address challenging combinatorial optimization tasks like the maximum cut problem. In this study, we utilize an evolutionary algorithm equipped with a unique fitness function. This approach targets hard maximum cut instances within the latent space of a Graph Autoencoder, identifying those that pose significant challenges or are particularly tractable for RQAOA, in contrast to the classic Goemans and Williamson algorithm. Our findings not only delineate the distinct capabilities and limitations of each algorithm but also expand our understanding of RQAOA’s operational limits. Furthermore, the diverse set of graphs we have generated serves as a crucial benchmarking asset, emphasizing the need for more advanced algorithms to tackle combinatorial optimization challenges. Additionally, our results pave the way for new avenues in graph generation research, offering exciting opportunities for future explorations.
An Adaptive Re-evaluation Method for Evolution Strategy under Additive Noise
Authors: Catalin-Viorel Dinu, Yash J. Patel, Xavier Bonet-Monroig, Hao Wang
Abstract: The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is one of the most advanced algorithms in numerical black-box optimization. For noisy objective functions, several approaches have been proposed to mitigate the noise, e.g., re-evaluating the same solution or adapting the population size. In this paper, we devise a novel method to adaptively choose the optimal re-evaluation number for function values corrupted by additive Gaussian white noise. We derive a theoretical lower bound on the expected improvement achieved in one iteration of CMA-ES, given an estimate of the noise level and the Lipschitz constant of the function's gradient. Solving for the maximum of this lower bound yields a simple expression for the optimal re-evaluation number. We experimentally compare our method to state-of-the-art noise-handling methods for CMA-ES on a set of artificial test functions across various noise levels, optimization budgets, and dimensionalities. Our method demonstrates significant advantages in terms of the probability of hitting near-optimal function values.
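The mechanism that re-evaluation builds on is simple to state: averaging m independent evaluations of a function corrupted by additive Gaussian noise shrinks the noise standard deviation by a factor of √m. A minimal sketch of that mechanism follows (the paper's actual contribution, choosing m adaptively from the estimated noise level and the gradient's Lipschitz constant, is not reproduced here):

```python
import numpy as np

def averaged_eval(f_noisy, x, m):
    """Average m independent evaluations; for additive Gaussian noise
    with standard deviation sigma_eps, the noise on the average has
    standard deviation sigma_eps / sqrt(m)."""
    return float(np.mean([f_noisy(x) for _ in range(m)]))

# Toy demonstration with a noisy sphere function (sigma_eps = 0.5).
rng = np.random.default_rng(0)
f_noisy = lambda x: float(np.dot(x, x)) + rng.normal(0.0, 0.5)
x = np.ones(5)
print(averaged_eval(f_noisy, x, m=1), averaged_eval(f_noisy, x, m=100))
```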
Transfer Learning of Surrogate Models: Integrating Domain Warping and Affine Transformations
Authors: Shuaiqun Pan, Diederick Vermetten, Manuel López-Ibáñez, Thomas Bäck, Hao Wang
Abstract: Surrogate models provide efficient alternatives to computationally demanding real-world processes but often require large datasets for effective training. A promising solution to this limitation is the transfer of pre-trained surrogate models to new tasks. Previous studies have investigated the transfer of differentiable and non-differentiable surrogate models, typically assuming an affine transformation between the source and target functions. This paper extends previous research by addressing a broader range of transformations, including linear and nonlinear variations. Specifically, we consider the combination of an unknown input warping, such as one modeled by the beta cumulative distribution function, with an unspecified affine transformation. Our approach achieves transfer learning by employing a limited number of data points from the target task to optimize these transformations, minimizing the empirical loss on the transfer dataset. We validate the proposed method on the widely used Black-Box Optimization Benchmark (BBOB) testbed and a real-world transfer learning task from the automobile industry. The results underscore the advantages of the approach, revealing that the transferred surrogate significantly outperforms both the original surrogate and one built from scratch on the transfer dataset, particularly in data-scarce scenarios.
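A rough sketch of what such a transfer can look like, assuming inputs scaled to [0, 1] and a vectorized source model; the function and parameter names here are illustrative stand-ins, not the paper's code:

```python
import numpy as np
from scipy.stats import beta
from scipy.optimize import minimize

def fit_transferred_surrogate(source_model, X, y):
    """Fit a beta-CDF input warping composed with an affine output map,
        g(x) = a * source_model(beta.cdf(x, p, q)) + b,
    by least squares on a small transfer dataset (X, y)."""
    def loss(params):
        p, q, a, b = params
        if p <= 0.0 or q <= 0.0:  # beta shape parameters must be positive
            return np.inf
        pred = a * source_model(beta.cdf(X, p, q)) + b
        return float(np.mean((pred - y) ** 2))
    res = minimize(loss, x0=[1.0, 1.0, 1.0, 0.0], method="Nelder-Mead")
    p, q, a, b = res.x
    return lambda x: a * source_model(beta.cdf(x, p, q)) + b
```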
Code Evolution Graphs: Understanding Large Language Model Driven Design of Algorithms
Authors: Niki van Stein, Anna V. Kononova, Lars Kotthoff, Thomas Bäck
Abstract: Large Language Models (LLMs) have demonstrated great promise in generating code, especially when used inside an evolutionary computation framework to iteratively optimize the generated algorithms. However, in some cases they fail to generate competitive algorithms, or the code optimization stalls, and we are left with no recourse due to a lack of understanding of the generation process and the generated code. We present a novel approach that mitigates this problem by enabling users to analyze the code generated inside the evolutionary process and how it evolves over repeated prompting of the LLM. We show results for three benchmark problem classes and demonstrate novel insights. In particular, LLMs tend to generate more complex code with repeated prompting, but the additional complexity can hurt algorithmic performance in some cases. Different LLMs have different coding "styles", and code generated by one LLM tends to be dissimilar to that generated by others. These two findings suggest that using different LLMs inside a code evolution framework might produce higher-performing code than using a single LLM.
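To make the complexity observation concrete, a simple per-generation complexity proxy can be tracked across the evolutionary run; the sketch below uses AST node count as one such hypothetical metric (an illustration of the kind of signal involved, not the paper's exact feature set):

```python
import ast

def ast_node_count(source: str) -> int:
    """Crude complexity proxy: number of nodes in the code's AST."""
    return sum(1 for _ in ast.walk(ast.parse(source)))

def complexity_trajectory(generations):
    """Mean complexity per generation, where `generations` is a list of
    lists of code strings produced over repeated prompting."""
    return [sum(ast_node_count(src) for src in gen) / len(gen)
            for gen in generations]
```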
Cascading CMA-ES Instances for Generating Input-diverse Solution Batches
Authors: Maria Laura Santoni, Christoph Dürr, Carola Doerr, Mike Preuss, and Elena Raponi
Abstract: Rather than obtaining a single good solution for a given optimization problem, users often seek alternative design choices, because the best-found solution may perform poorly with respect to additional objectives or constraints that are difficult to capture in the modeling process. Aiming for batches of diverse, high-quality solutions is therefore often desirable, as it provides the flexibility to accommodate post-hoc user preferences. At the same time, it is crucial that the quality of the best solution found is not compromised. One particular problem setting that balances quality and diversity is to fix a required minimum distance between solutions while simultaneously obtaining the best possible fitness. Recent work by Santoni et al. [arXiv 2024] revealed that this setting is not well addressed by state-of-the-art algorithms, which perform on par with or worse than pure random sampling. Driven by this important limitation, we propose a new approach in which parallel runs of the covariance matrix adaptation evolution strategy (CMA-ES) inherit tabu regions in a cascading fashion. We empirically demonstrate that our CMA-ES-Diversity Search (CMA-ES-DS) algorithm generates trajectories from which high-quality solution batches respecting a given minimum distance requirement can be extracted, clearly outperforming batches obtained from off-the-shelf random sampling, multi-modal optimization algorithms, and standard CMA-ES.
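A minimal sketch of the cascading tabu idea (our illustration, not the CMA-ES-DS implementation; `optimise` is a placeholder for a single optimiser run, e.g. one CMA-ES instance):

```python
import numpy as np

def tabu_penalised(f, tabu_centres, d_min, penalty=1e9):
    """Wrap the objective so that points closer than d_min to any
    earlier solution are rejected via a large penalty."""
    def wrapped(x):
        x = np.asarray(x, dtype=float)
        if any(np.linalg.norm(x - c) < d_min for c in tabu_centres):
            return penalty
        return f(x)
    return wrapped

def cascade(f, optimise, n_batch, d_min):
    """Run `optimise` n_batch times; each run inherits the tabu regions
    around the best points of all earlier runs."""
    batch = []
    for _ in range(n_batch):
        best = optimise(tabu_penalised(f, batch, d_min))
        batch.append(np.asarray(best, dtype=float))
    return batch
```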
Optimizing Photonic Structures with Large Language Model Driven Algorithm Discovery
Authors: Haoran Yin, Anna V. Kononova, Thomas Bäck, Niki van Stein
Abstract: We study how large language models can be used in combination with evolutionary computation techniques to automatically discover optimization algorithms for the design of photonic structures. Building on the Large Language Model Evolutionary Algorithm (LLaMEA) framework, we introduce structured prompt engineering tailored to multilayer photonic problems such as Bragg mirrors, ellipsometry inverse analysis, and solar cell antireflection coatings. We systematically explore multiple evolutionary strategies, including (1+1), (1+5), (2+10), and others, to balance exploration and exploitation. Our experiments show that algorithms generated by the LLM on small-scale problem instances can match or surpass established methods such as quasi-oppositional differential evolution on large-scale, real-world problem instances. Notably, LLaMEA's self-debugging mutation loop, augmented by automatically extracted problem-specific insights, achieves strong anytime performance and reliable convergence across diverse problem scales. This work demonstrates the feasibility of combining domain-focused LLM prompts with evolutionary approaches to solve optical design tasks, paving the way for rapid, automated photonic inverse design.
BLADE: Benchmark suite for LLM-driven Automated Design and Evolution of iterative optimisation heuristics
Authors: Niki van Stein, Anna V. Kononova, Haoran Yin, Thomas Bäck
Abstract: The application of Large Language Models (LLMs) for Automated Algorithm Discovery (AAD), particularly for optimisation heuristics, is an emerging field of research. This emergence necessitates robust, standardised benchmarking practices to rigorously evaluate the capabilities and limitations of LLM-driven AAD methods and the algorithms they generate, especially given the opacity of their design process and known issues with existing benchmarks. To address this need, we introduce BLADE (Benchmark suite for LLM-driven Automated Design and Evolution), a modular and extensible framework specifically designed for benchmarking LLM-driven AAD methods in a continuous black-box optimisation context. BLADE integrates collections of benchmark problems (including MA-BBOB and SBOX-COST, among others) with instance generators and textual descriptions aimed at capability-focused testing, such as generalisation, specialisation, and information exploitation. It offers flexible experimental setup options, provides standardised logging for reproducibility and fair comparison, incorporates methods for analysing the AAD process (e.g., Code Evolution Graphs and various visualisation approaches), and facilitates comparison against human-designed baselines through integration with established tools such as IOHanalyzer and IOHexplainer. BLADE provides an "out-of-the-box" solution to systematically evaluate LLM-driven AAD approaches. The framework is demonstrated through two distinct use cases exploring mutation prompt strategies and function specialisation.
Hot Off the Press
- LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics
- Explainable Benchmarking for Iterative Optimization Heuristics
- A Newton Method for Hausdorff Approximations of the Pareto Front within Multi-objective Evolutionary Algorithms
Competition Track
- Neighborhood Adaptive Differential Evolution
Wrapping Up
These papers reflect our ongoing focus on foundational methods, real-world relevance, and emerging trends like LLMs, surrogate modeling, and quantum-inspired optimization. We’re excited to share and discuss these ideas with the GECCO community in July.
Stay tuned for more updates and deeper dives as we approach the conference.
Follow us on LinkedIn for news, preprints, and post-conference impressions.