We are excited to announce the tutorial "Benchmarking single- and multi-objective optimization algorithms: how to make your experimental data more valuable" at GECCO 2025.
Comparing and evaluating optimization algorithms by empirical means is an important – and probably the most commonly applied – approach to gaining insight into evolutionary computation methods. However, while our community tends to agree that generating and analyzing sound benchmarking data is far from trivial, we treat the process in a rather wasteful manner, paying little attention to standardizing data records, sharing data, and similar practices. With this tutorial, we will share our experience on how to boost the efficacy of our benchmarking efforts at almost no cost using the IOHprofiler software framework and its recent extensions to anytime performance measures and multi-objective optimization. A strong focus will be put on demonstrating the ease with which IOHprofiler modules can be combined with other benchmarking and optimization toolboxes such as COCO, Nevergrad, and Pymoo. We will also discuss how benchmarking data can be shared more easily within the community and the benefits this brings, both in terms of core research contributions and towards more sustainable research practices in evolutionary computation.
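To give a flavor of the workflow the tutorial builds on, the sketch below shows how an experiment can be logged in IOHanalyzer-compatible format via the `ioh` Python bindings of IOHprofiler. It is a minimal illustration only: the problem name, directory names, and the simple random-search baseline are chosen for this example, and exact parameter names of the logger may differ between package versions.

```python
# Minimal sketch: logging a random-search baseline on a BBOB problem with the
# `ioh` Python bindings of IOHprofiler. Parameter names are assumptions based
# on recent releases of the package and may vary with your installed version.
import numpy as np
import ioh

# BBOB "Sphere" function in 10 dimensions (a function id can be used instead).
problem = ioh.get_problem("Sphere", instance=1, dimension=10)

# Attach an Analyzer-style logger so the run can later be inspected in IOHanalyzer.
logger = ioh.logger.Analyzer(
    root="ioh-data",              # output directory (example name)
    folder_name="random-search",  # one folder per experiment (example name)
    algorithm_name="RandomSearch",
)
problem.attach_logger(logger)

# Plain random search inside the standard BBOB box [-5, 5]^d.
rng = np.random.default_rng(42)
for _ in range(1000):
    x = rng.uniform(-5.0, 5.0, size=10)
    problem(x)                    # each evaluation is recorded by the logger

logger.close()                    # flush the logged data to disk
```

The resulting data folder can be zipped and uploaded to the IOHanalyzer web service or loaded locally for performance analysis, which is exactly the kind of low-cost data-sharing practice the tutorial advocates.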