Workshop
Self-Supervised Learning from Time Series
Tuesday 22 October 13.30
Organizer: Lars Kai Hansen, Technical University of Denmark
Self-supervision is a core mechanism of modern AI, used to build representations in many domains, including text, images, audio, and video [Balestriero et al., 2023]. While many successful applications of self-supervised learning (SSL) have been reported, most current results are empirical and engineering-driven, leaving fundamental issues open [Michaud et al., 2023; Bordelon et al., 2024].
With a specific focus on temporal data such as audio and EEG, we ask how architectures, loss functions, and learning processes together shape learning from large unlabeled temporal data sets. Explainability of the learned representations is an important open issue for time series data. The session is motivated by research in the Pioneer Centre Collaboratory "Signals and Decoding". We focus on open problems and ideas for new research, including access to data and models.
Deep Learning for Decoding Attended Speech from Brain Responses: Current Insights and Future Directions
Principal Scientist Emina Alickovic, Eriksholm Research Centre (30 min)
Explainable AI
Associate Professor Kristoffer Wickstrøm, Arctic Univ. of Norway (30 min)
SSL for EEG
PhD student Lina Skerath, DTU (10 min)
SSL for Audio
PhD student Sarthak Yadav, Aalborg Univ. (10 min)
Discussion of SSL grand challenges (10 min)
Level: Advanced