
Decisión algorítmica y principio de igualdad

P. Zuddas
2022-01-01

Abstract

This work is divided into two parts. The first analyses the risk that a specific type of artificial intelligence, machine learning, produces discriminatory decisions, identifying three phases of the algorithmic decision in which the conditions for discriminatory effects may arise. Such effects may stem from the prejudices of the programmers or of the organisation in which they operate; from the dataset used to feed the system; or from the not infrequent circumstance that the machine learning system itself, in a largely autonomous way, identifies characteristics that indirectly refer to protected categories. The second part of the work identifies the tools the law offers to address these dangers, highlighting the need to broaden the scope of the legal framework, which must address two distinct profiles of the algorithmic decision: an “internal” profile, concerning the functioning of the artificial intelligence across the three relevant phases identified; and an “external” profile, concerning the role of the algorithm in the final decision — essentially, the weight given to the algorithm and the possibility of human intervention as a control, aimed at mitigating the discriminatory effects of the decision model developed by the software.
https://www.iustel.com/v2/revistas/detalle_revista.asp?id_noticia=424884&d=1
Keywords: artificial intelligence, algorithmic decision, discrimination
Files in this record:

DECISIÓN ALGORÍTMICA Y PRINCIPIO DE IGUALDAD.pdf (not available)
Type: publisher's version (PDF)
Licence: publisher's copyright
Size: 668.65 kB (Adobe PDF)

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11383/2136024