Workshop

Verifiable, Robust, and Explainable AI

Wednesday 23 October, 10:30

Organizer: Boris Düdder, University of Copenhagen

Artificial intelligence, primarily driven by rapid advances in deep learning, has produced exciting results, but it has also created a growing need for methods that not only explain the decisions of machine learning models but also keep their performance robust under changing conditions. Such methods are crucial for providing firm guarantees about system behavior with respect to safety, privacy preservation, and non-discrimination. Explainable AI (XAI) plays a central role in ensuring the verifiability and robustness of AI systems.

These emerging key issues for the further advancement of AI are studied both in the AI/ML communities and by researchers from areas traditionally concerned with the safety and verification of software systems through formal methods such as model checking and theorem proving. However, although these communities work towards the same goals, the interaction between them has been limited. This workshop aims to bridge that gap.


Program

  • Welcome and introduction, Kim Guldstrand Larsen, Professor/AAU, 5 mins
  • Industrial talk, Gitte Rosenkranz, Project Manager, Digitization/HOFOR, 15 mins
  • Academic talk, Martijn Goorden, Assistant Professor/AAU, 10 mins
  • Academic talk, Vlad Paul Cosma, Postdoc/KU, 10 mins
  • Industrial talk, Søren Debois, CTO/DCR Solutions, 15 mins
  • Academic talk, Axel Christfort, PhD student/KU, 10 mins
  • Academic talk, Davide Mottin, Associate Professor/AU, 10 mins
  • Panel discussion, all speakers, 15 mins


Organizers
  • Kim Guldstrand Larsen, Professor, AAU
  • Boris Düdder, Associate Professor, KU
  • Thomas Hildebrandt, Professor, KU
  • Manfred Jaeger, Associate Professor, AAU
  • Jaco van de Pol, Professor, AU
  • Christian Schilling, Associate Professor, AAU


Level

Intermediate: For attendees who have a basic understanding of, or some experience with, the subject but are not yet advanced.