Workshop
Resource-Aware Machine Learning
Tuesday 26 August 10.45
Organizer: Pinar Tözün, IT University of Copenhagen
According to the 2024 AI Index Report, the computational footprint of state-of-the-art language models has grown by seven orders of magnitude since 2017, while the cost of training these models in the cloud has increased by five orders of magnitude. As a result, the estimated carbon footprint of a state-of-the-art large language model is equivalent to that of 50-90 human years. This makes the current rate of growth in model parameters, datasets, and compute budgets unsustainable. To achieve more sustainable progress in ML, it is essential to invest in more resource-, energy-, and cost-efficient solutions.
In this session, we will explore how to make ML’s computational and carbon footprint more transparent and how to improve ML resource efficiency through software/hardware co-design. We plan to take a holistic view of the ML landscape, covering data preparation and loading, continual retraining of models in dynamic data environments, compiling ML for specialized hardware accelerators, and serving models for real-time applications with low-latency requirements in resource-constrained environments.
The session aims to reason critically about how we build software and hardware for end-to-end machine learning and how we create policies to ensure more sustainable development in this field. We therefore expect interest and contributions from academia and industry across the fields of data management, machine learning, systems, and computer architecture.
Increased awareness of the resource needs of ML and ways to reduce them.
Brief introduction of the session (10 minutes)
Talks from speakers approaching the challenge of sustainability of ML from different angles (60 minutes)
(7-10 minutes/talk, 3-5 minutes/Q&A)
Open discussion with the audience (20 minutes)
Intermediate: For attendees who have a basic understanding of or some experience with the subject but are not yet advanced.