LLM-driven AAD

We are excited to announce the LLM-driven Automated Algorithm Design tutorial at CEC 2026.

Large language models (LLMs) are transforming how we create and automate the discovery of AI techniques and algorithms. This shift moves us beyond hyperparameter tuning and automated algorithm selection into fully automated algorithm design (AAD) and the discovery of architectures and end-to-end pipelines, effectively closing the loop between ideation and evaluation. This tutorial provides an overview of the rapidly evolving landscape, including frameworks such as EASE, LLaMEA, LHNS, MCTS-AHD, PartEVO, AlphaEvolve, FunSearch, and emerging Gen-AI-driven AI assistants.

We specifically contrast two frameworks developed by our own teams: the architecture of EASE (Effortless Algorithmic Solution Evolution) and the evolution-focused approach of LLaMEA (Large Language Model Evolutionary Algorithm). We will explore EASE as a practical, fully modular framework for iterative, closed-loop generation and evaluation; beyond algorithm code, EASE can also iteratively generate text and graphics. The tutorial will then shift its focus to the LLaMEA framework: its connection with the IOH benchmarking ecosystem, recent advancements including the hyperparameter optimization toolkit LLaMEA-HPO and LLaMEA-BO, and BLADE, a dedicated benchmarking suite for automated algorithm discovery. Participants will have the unique opportunity to hear from two seemingly competing teams and learn about both frameworks in one place, and to see how the teams collaborate effectively in this rapidly developing area, complementing each other's expertise for greater efficiency and broader deployment of these frameworks for AAD.

The format of the tutorial is demo-driven (no hands-on activities): we will feature short videos and narrated walkthroughs of the EASE frontend and backend and of LLaMEA examples. These will illustrate task setup, fitness and evaluation loops, and result inspection; a toy sketch of such a closed loop is shown below. Additionally, we will provide links to EASE/LLaMEA variants, documentation, and benchmarking environments. We will highlight key guardrails, including testing, analysis, and time/resource caps, as well as best practices in evaluation and benchmarking. Attendees will leave with practical criteria for choosing between LLaMEA and EASE, methods for responsible evaluation, and concrete steps for incorporating LLM-driven discovery into their AI research.

Building on these frameworks, we will further discuss the orchestration problem: how ensembles of small and large language models can cooperatively drive algorithmic discovery. This involves defining coordination layers in which smaller, specialized models perform constrained optimization, symbolic reasoning, or surrogate evaluation, while larger foundation models handle global synthesis and hypothesis generation. We will outline a multi-agent AutoML orchestration paradigm that integrates human oversight, modular LLM agents, and evolutionary search within a closed feedback ecosystem.
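To make the core idea concrete, the toy sketch referenced above walks through the closed generate-evaluate-refine loop that EASE and LLaMEA both build on in far more elaborate form. It is a minimal illustration under our own assumptions: query_llm and evaluate_candidate are hypothetical placeholders, not the actual EASE or LLaMEA APIs, and a real harness would add sandboxing, proper benchmark suites, and time/resource caps.

```python
# Toy sketch of a closed LLM-driven design loop. All names here are
# illustrative placeholders, not the EASE or LLaMEA APIs.

import random


def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call. A real setup would send `prompt` to an
    LLM backend; here we return a fixed toy optimizer so the loop runs."""
    return (
        "def optimize(f, dim, budget):\n"
        "    best_x = [random.uniform(-5, 5) for _ in range(dim)]\n"
        "    best_y = f(best_x)\n"
        "    for _ in range(budget - 1):\n"
        "        x = [xi + random.gauss(0, 0.3) for xi in best_x]\n"
        "        y = f(x)\n"
        "        if y < best_y:\n"
        "            best_x, best_y = x, y\n"
        "    return best_y\n"
    )


def evaluate_candidate(code: str) -> float:
    """Placeholder fitness: run the generated optimizer on a toy sphere
    function. A real harness would use a benchmark suite (e.g. IOH-style
    problems), sandboxed execution, and time/resource caps as guardrails."""
    namespace = {"random": random}
    exec(code, namespace)  # never do this without sandboxing in practice
    sphere = lambda x: sum(xi * xi for xi in x)
    return float(namespace["optimize"](sphere, dim=5, budget=200))


def closed_loop_design(task: str, iterations: int = 10) -> tuple[str, float]:
    """Generate, evaluate, and refine candidates; keep the best one."""
    prompt = f"Write a Python optimizer for: {task}"
    best_code, best_score = "", float("inf")
    for _ in range(iterations):
        candidate = query_llm(prompt)
        try:
            score = evaluate_candidate(candidate)
        except Exception as err:  # generated code may not even run
            score, feedback = float("inf"), f"Your code raised: {err!r}"
        else:
            feedback = f"Your code scored {score:.4g} (lower is better)."
        if score < best_score:
            best_code, best_score = candidate, score
        # Feed the evaluation result back into the next prompt to close the loop.
        prompt = (
            f"{task}\n\nPrevious attempt:\n{candidate}\n"
            f"{feedback}\nPropose an improved version."
        )
    return best_code, best_score


if __name__ == "__main__":
    code, score = closed_loop_design("minimize a 5-dimensional black-box function")
    print(f"best score found: {score:.4g}")
```

In a real system, the feedback string would carry full benchmark statistics and error traces, which is exactly where the guardrails discussed in the tutorial (testing, analysis, and time/resource caps) become essential.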

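To make the orchestration idea above slightly more tangible, here is a purely hypothetical role-routing sketch: cheap specialist models handle narrow subtasks while an expensive foundation model is reserved for global synthesis. None of these names correspond to an existing framework, and a real coordination layer would add scheduling, shared memory, and human oversight.

```python
# Hypothetical sketch of routing subtasks to small vs. large models.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Agent:
    name: str
    cost: float                  # relative cost per call
    run: Callable[[str], str]    # the (stubbed) model call


def small_model(task: str) -> str:
    # Stand-in for a cheap specialist model (surrogate evaluation, symbolic checks).
    return f"[small-model result for: {task}]"


def large_model(task: str) -> str:
    # Stand-in for an expensive foundation model (global synthesis, hypotheses).
    return f"[large-model synthesis for: {task}]"


AGENTS = {
    "surrogate_eval": Agent("small-specialist", cost=0.1, run=small_model),
    "symbolic_check": Agent("small-specialist", cost=0.1, run=small_model),
    "synthesis": Agent("large-foundation", cost=1.0, run=large_model),
}


def orchestrate(plan: list[tuple[str, str]]) -> tuple[list[str], float]:
    """Route each (role, task) pair to its agent, reserving the expensive
    large-model calls for synthesis, and track the total relative cost."""
    results, total_cost = [], 0.0
    for role, task in plan:
        agent = AGENTS[role]
        results.append(agent.run(task))
        total_cost += agent.cost
    return results, total_cost


if __name__ == "__main__":
    outputs, cost = orchestrate([
        ("surrogate_eval", "estimate fitness of candidate #3"),
        ("symbolic_check", "check loop invariants of candidate #3"),
        ("synthesis", "combine the feedback into a new candidate algorithm"),
    ])
    print("\n".join(outputs))
    print(f"total relative cost: {cost:.1f}")
```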
We will also demonstrate how human-in-the-loop co-discovery mechanisms can inject domain expertise and interpretability into this process, ensuring that automated exploration remains guided, explainable, and auditable. Finally, the tutorial will introduce a statistical validation layer that grounds algorithmic discovery in rigorous evidence by embedding Bayesian comparison, stochastic dominance tests, and sequential evaluation directly into the workflows. These methods establish reproducible, uncertainty-aware benchmarks, turning LLM-driven discovery into a scientifically testable process rather than a heuristic exploration. Together, these ideas chart a practical path toward Auto-Science systems that both generate and justify new algorithms.
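As one possible illustration of such a validation layer, the sketch below compares a newly discovered algorithm against an incumbent using a rank-based test together with a bootstrap estimate of the probability of improvement. The function names and data are illustrative assumptions on our part and are not taken from BLADE or IOH, which provide far richer tooling for this kind of analysis.

```python
# Illustrative uncertainty-aware comparison of two algorithms' benchmark errors.

import numpy as np
from scipy.stats import mannwhitneyu


def prob_improvement(new: np.ndarray, old: np.ndarray, n_boot: int = 10_000,
                     rng: np.random.Generator | None = None) -> float:
    """Bootstrap estimate of P(mean error of new < mean error of old)."""
    rng = rng or np.random.default_rng(0)
    new_means = rng.choice(new, size=(n_boot, new.size), replace=True).mean(axis=1)
    old_means = rng.choice(old, size=(n_boot, old.size), replace=True).mean(axis=1)
    return float((new_means < old_means).mean())


def validate(new: np.ndarray, old: np.ndarray) -> dict:
    """Combine a rank-based test (stochastic-dominance flavour) with a
    bootstrap probability of improvement over the incumbent."""
    _, p_value = mannwhitneyu(new, old, alternative="less")  # does new tend to be lower?
    return {
        "mann_whitney_p": float(p_value),
        "p_improvement": prob_improvement(new, old),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    incumbent = rng.lognormal(mean=0.0, sigma=0.5, size=30)    # final errors over 30 runs
    discovered = rng.lognormal(mean=-0.3, sigma=0.5, size=30)  # candidate's final errors
    print(validate(discovered, incumbent))
```

Embedding such checks directly into the evaluation loop is what turns a stream of generated candidates into reproducible, uncertainty-aware evidence.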

Organizers:

Niki van Stein
Associate Professor of Explainable AI