UN issues recommendations on artificial intelligence and racial profiling by law enforcement
On 24 November 2020, the United Nations Committee on the Elimination of Racial Discrimination (the Committee) issued General Recommendation No. 36 on preventing and combating racial profiling by law enforcement officials, including in respect of the use of artificial intelligence (AI).
As noted by the Committee, the increasing use of new technologies, including AI, in areas such as security, border control and access to social services has the potential to deepen racism, racial discrimination, xenophobia and other forms of exclusion. Furthermore, while AI in decision-making processes can contribute to greater effectiveness in some areas, there is also a real risk of algorithmic bias when AI is used in decision-making in the context of law enforcement, including in respect of algorithmic profiling.
According to the Committee: “There are various entry points for bias that could be integrated into algorithmic profiling systems, ranging from the way in which these systems are designed, decisions as to the origin and scope of the datasets on which these systems are trained, societal and cultural biases that developers may build into those datasets, the artificial intelligence models themselves and the way in which the outputs of the artificial intelligence model are implemented in practice.” In this regard, particular risks emerge when algorithmic profiling is used to determine the likelihood of criminal activity in certain localities, or by certain groups or individuals. Added to this, the increasing use of facial recognition and surveillance technologies to track and control specific demographics raises concerns with respect to various fundamental rights, including privacy, freedom of peaceful assembly and association, freedom of expression and freedom of movement.
The recommendations from the Committee regarding the use of AI included the following:
- States should ensure that algorithmic profiling systems used for law enforcement purposes fully comply with international human rights law.
- States should carefully assess the human rights impact before deploying facial recognition technology, which can lead to misidentification owing to a lack of representation in the data on which it is trained.
- States should ensure that algorithmic profiling systems deployed for law enforcement purposes are designed for transparency, and should allow researchers and civil society to access the code for scrutiny.
- States should take all appropriate measures to ensure transparency of the use of algorithmic profiling systems.
- States should adopt measures to ensure human rights compliance of private sector design, deployment and implementation of AI systems in the area of law enforcement.
- States should ensure that all instances of algorithmic bias are duly investigated and sanctioned.
- States should ensure that companies developing, selling or operating algorithmic profiling systems for law enforcement purposes involve individuals from multiple disciplines, such as sociology, political science, computer science and law, to define the risks and to ensure respect for human rights.
- Human rights bodies, states, national human rights institutions and civil society organisations should carry out studies, and disseminate the results and good practices, on effective measures to address racial bias derived from artificial intelligence, including measures relating to human rights compliance, the ethical aspects of machine learning, and the relevant criteria for interpretability and transparency in the programming and training of algorithms.
The full text of General Recommendation No. 36 is accessible here.
Please note: The information contained in this note is for general guidance on matters of interest, and does not constitute legal advice. For any enquiries, please contact us at [email protected].