Deep Dive workshop

Verifiable, Robust, and Explainable AI

Wednesday 27 August, 9:00

Organizer: Christian Schilling, Aalborg University

Driven by rapid advances in machine learning, artificial intelligence (AI) has produced exciting results across various scientific disciplines and practical applications. This success comes with a downside: machine-learned models are widely regarded as black boxes. This leads to new challenges and an increasing need for methods to:

  1. show that the models behave correctly,
  2. make the performance of the models robust under changing conditions, and
  3. explain the decisions made by the models.

Such methods are crucial for building trust in AI systems: they provide guarantees on behavior regarding aspects like safety and privacy, enhance transparency, and enable the identification of potential biases or errors. These emerging key issues for the further advancement of AI are being studied both in the AI/ML communities and by researchers from the areas traditionally concerned with the safety and verification of software systems by formal methods (FM).

This workshop aims to build a bridge between these communities, which are working toward the same goals, and to exchange ideas and scientific approaches for tackling the challenges of building trustworthy, robust, and explainable AI (XAI) systems. It offers a unique opportunity for interdisciplinary collaboration and knowledge exchange.

The program includes speakers from both the FM and XAI angles to ensure a broad perspective.

Program

9:00–9:05 Introduction

9:05–9:46 Martin Leucker: High-Level Perspectives on Neural Network Verification

9:46–10:08 Tommy Sonne Alstrøm: Explainable AI for Time Series

10:08–10:30 Davide Mottin: Robustness in Knowledge Representation: Methods and Challenges

10:30–10:45 Coffee break

10:45–11:07 Lars Kai Hansen: Levels of Human-AI Alignment

11:07–11:29 Mikkel Baun Kjærgaard: Robust AI Support for Software Developers

11:29–11:51 Akhil Arora: Prakāśa: Quantifying the Underlying (In)consistency of LLM Reasoning

11:51–12:13 Thomas Bolander: Epistemic Planning: From Automated Planning and Epistemic Logic to Socially Intelligent Robots

12:13–12:15 Closing

Organizers