Workshop
AI, human rights and regulation: challenges and opportunities
around the AI-Act
Tuesday 22 October 13.00
Organizer: Sneha Das, Technical University of Denmark
Recent breakthroughs in deep learning and machine learning are positively impacting industries such as healthcare, education, and public and government services. However, these advancements also pose ethical and societal challenges. When AI systems are employed in critical decision-making areas such as healthcare, biases, errors, and a lack of transparency can have severe consequences. This necessitates a rigorous technical understanding of both AI's potential and its risks, especially in the context of emerging regulatory frameworks such as the EU's AI Act.
The EU AI Act presents both challenges and opportunities for human rights due diligence in AI, i.e. the process of identifying, assessing, addressing, and mitigating human rights and ethical harms. The Act provides AI developers, deployers, and users with clear requirements and obligations with respect to specific uses of artificial intelligence. With its stated objective of ensuring that AI systems have no negative impact on fundamental rights, the Act has implications for all stakeholders, including researchers who develop AI systems for high-risk use cases with commercial value. The proposal acknowledges the adverse impacts that AI can have on fundamental rights, including the rights to privacy, protection of personal data, freedom of expression and information, freedom of assembly and of association, non-discrimination, consumer protection, workers' rights, the rights of persons with disabilities, and the rights of children.
However, a number of aspects remain to be fleshed out and clarified in the implementation of the Act. On the technical front, there is as yet no consensus on the standards and metrics against which AI applications should be assessed for safety and risk. Furthermore, the transparency, assessment, and due diligence processes of AI developers and deployers still need to be quantified. Some of these aspects are being addressed in the ongoing standardization process of the AI Act. This session addresses what we need from technical, regulatory, and policy directions for research and development in AI with safety, equity, and accessibility at its core, thereby touching upon D3A's focus area on 'Responsible and ethical AI and digital technologies'. The session will bring together academics, legal experts, and ethics and human rights researchers to discuss these issues and potential solutions.
Within the 90-minute session, the tentative schedule is as follows:
The panel discussion will begin with moderated questions to the invited experts and then open the floor for audience engagement. The session will explore challenges and opportunities for human rights due diligence under the AI Act. The invited speakers (and panelists) are:
Evolving space of human rights in the digital domain
Rikke Frank Jørgensen, Senior Researcher, Danish Institute for Human Rights
Technical perspective on bias and fairness with potential ethical conflicts
Tareen Dawood, Postdoctoral Researcher, Technical University of Denmark, DTU Compute
Ethics and human rights in AI development and deployment
Brigitte Kofoed, Ph.D., Human Rights Advice
Role of statistics, AI evaluation and risk assessment in the standards of the AI Act
Sneha Das, Assistant Professor, Technical University of Denmark, DTU Compute
Towards Human Rights 2.0 – changing landscapes in the age of AI
Sue Ann Teo, LL.M, PhD, Postdoctoral Research Fellow, Raoul Wallenberg Institute & External lecturer at KU
Panel Moderator:
Cathrine Bloch Veiberg, Head of Human Rights and Social Sustainability, United Nations Global Compact Network Denmark.
The target audience is anyone with an interest in, or working within, the changing landscape of AI research, deployment, and regulation, and its potential gaps and opportunities. Conference participants engaged in applications of data science in disciplines such as human-centric computing, social signal processing, trustworthy AI, and related fields may also find this session useful.