AI for Oversight

Partners: Human Environment and Transport Inspectorate (ILT), Netherlands Labour Authority, Inspectorate of Education, Netherlands Food and Consumer Product Safety Authority, TNO and Utrecht University

Supervision does not mean inspecting everything, everywhere, all the time; that is impossible. Choices must be made, and the challenge is to inspect precisely where the societal contribution is greatest. How do you achieve a risk-based approach, deploying inspectors as effectively as possible at the right times and places? This is the task for which regulatory bodies are collectively seeking a solution. Artificial intelligence (AI) plays a significant role here, especially as AI techniques become more sophisticated.

Optimal Support through Algorithms

“We already use AI where possible for a responsible, selective, and effective deployment of our inspectors. But there are more opportunities ahead,” says Mattheus Wassenaar, Inspector General of ILT, summarizing the motivation for starting this collaboration. “Together with universities, we will develop methods to ensure that our people are optimally supported by algorithms. Inspectors are scarce, and they don’t generate much data. This means we need algorithms that learn faster with limited data. There is also a focus on preventing unwanted selection bias. We are doing everything to collectively develop AI that can be deployed in the oversight domain responsibly and reliably.”

Developing and Testing New Methods

Practical experience has highlighted the need for new AI methods in the oversight domain. This led to the current research agenda, in which universities develop methods that align well with practice, all in close collaboration with the participating inspectorates. In this way, the gap between theory and practice is bridged: inspectorates and TNO can use the new methods, and universities can build their research on practical case studies. The research agenda focuses on three topics: collaboration between humans and machines, faster and fairer learning algorithms, and the contribution of AI to behavior improvement.

How Humans and AI Can Strengthen Each Other

“With the use of AI, inspectors gain a colleague,” says Jasper van Vliet, one of the scientific leads of the lab. “It’s a digital colleague with a strong memory, tireless and consistent, that can advise inspectors on where they can have the most impact.”

“The strength of AI algorithms is particularly evident with large and complex datasets, while humans excel in individual cases and placing information in the right context,” adds Cor Veenman, another scientific lead of the new lab. “By closely integrating human inspectors and AI systems, you get a very effective team.”

Testing New Approaches

Participants in the new ICAI Lab will not only share their knowledge and expertise but also conduct joint experiments to test new approaches. There is a strong emphasis on the interaction between inspectors and AI applications: a crucial success factor in achieving responsible AI that is fair, just, and explainable. Moreover, inspectors can play a vital role in the learning process the algorithms must undergo. An additional challenge is that inspections take a lot of time and obtaining the right data is difficult, which is why data is so precious and scarce in the oversight domain.

Researchers in the Field with Inspectors

“Feedback from inspectors is essential,” emphasizes Van Vliet. “They are familiar with the application area and often have insight into whether an inspection is worthwhile. If we can incorporate this knowledge into the AI learning process, we can learn much faster. How this will work in detail will be the focus of the PhD candidates. They will not only work behind their desks but also in the field, accompanying inspectors to experience how AI can make a difference.”
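One way to picture how inspector feedback could speed up learning is a simple active-learning loop. Everything below is a hypothetical toy sketch, not the lab's actual method: the category names, the Laplace-smoothed estimator, and the uncertainty rule are all illustrative assumptions. The idea is that the model asks the inspector to check the case it is currently most uncertain about, so each scarce inspection yields maximal information.

```python
# Hypothetical sketch of an active-learning loop: the model estimates a
# violation rate per category from the inspections labeled so far and asks
# the inspector to check the case it is most uncertain about.

def violation_rate(labels):
    # Laplace-smoothed estimate of the violation probability for one category.
    return (sum(labels) + 1) / (len(labels) + 2)

def most_uncertain(pool, history):
    # The case whose category estimate is closest to 0.5 is the most informative.
    return min(pool, key=lambda case: abs(violation_rate(history[case["category"]]) - 0.5))

def run_inspections(pool, inspector, budget):
    # 'inspector' stands in for the human verdict: 1 = violation, 0 = compliant.
    history = {case["category"]: [] for case in pool}
    for _ in range(budget):
        case = most_uncertain(pool, history)
        pool.remove(case)
        history[case["category"]].append(inspector(case))
    return {category: violation_rate(labels) for category, labels in history.items()}
```

Given a pool of cases and a verdict function standing in for the inspector, the loop concentrates inspections on the categories whose risk estimate is least settled, which is one common way limited labels are spent efficiently.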

Preventing the Mirror Effect

“It is essential that we support inspectors only with reliable and fair algorithms,” emphasizes Veenman. “In the AI4Oversight Lab, there is ample attention for challenges such as unwanted steering in advice. Data collection is often colored by human biases, and if an algorithm then adopts those biases, you get the mirror effect. That is highly undesirable, of course. The new lab is fully focused on addressing this: together with all participants, we will develop new forms of data collection and new algorithms to counteract the mirror effect.”
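The mirror effect can be illustrated with a toy example of biased data collection. The numbers, sector names, and the inverse-propensity correction below are illustrative assumptions, not the lab's method: two sectors are equally risky, but because one was historically inspected far more often, raw violation counts make it look riskier. Reweighting each record by how likely it was to be inspected is one standard way to undo such sampling bias.

```python
# Illustrative sketch (all numbers hypothetical): two sectors have the same true
# violation rate, but historical data over-samples sector "A". A naive count-based
# model mirrors that sampling bias; inverse-propensity weighting corrects for it.

TRUE_RATE = {"A": 0.2, "B": 0.2}          # both sectors are equally risky
INSPECTION_SHARE = {"A": 0.8, "B": 0.2}   # but A was inspected four times as often

def collect(n):
    # Deterministic stand-in for historical inspection records:
    # violations found are proportional to inspections times the true rate.
    records = []
    for sector in ("A", "B"):
        inspections = int(n * INSPECTION_SHARE[sector])
        violations = int(inspections * TRUE_RATE[sector])
        records += [(sector, 1)] * violations + [(sector, 0)] * (inspections - violations)
    return records

def naive_risk(records, sector):
    # "Mirror effect": raw violation counts reflect where we looked, not true risk.
    return sum(v for s, v in records if s == sector) / len(records)

def ipw_risk(records, sector):
    # Weight each record by 1 / inspection propensity to undo the sampling bias.
    weighted = sum(v / INSPECTION_SHARE[s] for s, v in records if s == sector)
    total = sum(1 / INSPECTION_SHARE[s] for s, v in records if s == sector)
    return weighted / total
```

With these toy numbers, the naive estimate makes sector A look four times riskier than B purely because it was inspected four times as often, while the propensity-weighted estimate recovers the equal underlying rates.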

AI and Behavior Change

It is essential to realize that inspectorates are not there to impose fines; their ultimate goal is to contribute to positive behavior change. AI applications can contribute to this goal as well. The new lab aims to develop a data-driven approach for modeling the dynamics between behavior and inspections.

Building Bridges Between Theory and Practice

The four inspectorates in the lab already see the benefits of deploying AI, but they all face the same challenges. Effectively organizing teamwork and feedback, and preventing the mirror effect, are high on the agenda. The difficulty is that theoretical knowledge about these topics has often not yet been tested in practice, so bridges need to be built between theory and practice. TNO works extensively on this bridge and is happy to invest in the lab: “We collaborate with governments and businesses on the valuable use of AI that can make an impact. Jointly developed methods, grounded in practice, are particularly important,” says Frans van Ette, Program Director AI at TNO.

Collaboration between PhDs and Data Scientists

The new collaboration also offers great opportunities for universities. Thomas Dohmen, Director of AI Labs at Utrecht University, says, “The accessible and varied case material of the partners in this lab provides a range of opportunities for research. We see it as our joint responsibility to develop concrete, usable methods that advance inspections in daily practice. We also offer talented graduates the opportunity to delve into this theme through a PhD trajectory.”

Extending a Hand to Other Inspectorates

The ICAI Lab AI4Oversight currently has funding for five years. This provides enough time to complete the PhD research and to establish reliable, effective use of AI in oversight. “We want to show that this collaboration benefits all parties, and we hope that other inspection services will join during the project. So while we are taking the first step, we are keeping our hand extended to other inspectorates,” Van Vliet concludes.

People involved:


Sofoklis Kitharidis
Niki van Stein
Assistant Professor of Explainable AI
Prof. Thomas Bäck
Professor of Natural Computing

Related Projects:


CIMPLO
Cross-Industry Predictive Maintenance Optimization Platform

XAIPre
eXplainable AI for Predictive Maintenance