
Bias-aware guidelines and fairness-preserving taxonomy in software engineering education

Spoletini P.;
2019

Abstract

This innovative practice work-in-progress paper tackles the problem of unfairness and bias in software, which has recently emerged in countless cases. This unfairness can be present in the way software makes its decisions, or it can limit software functionality to certain populations. Well-known examples of this problem are the Microsoft Kinect facial recognition algorithm, which does not work properly with players who have darker skin, and the software used in 2016 by Amazon.com to determine the parts of the United States in which to offer free same-day delivery, which made decisions that excluded minority neighborhoods from the program. These phenomena often have roots in the fact that software is created by humans who are biased and live in biased and non-inclusive environments. Recent research from the software engineering community is starting to tackle this problem at many levels, from requirements analysis to automatic fairness testing techniques (first proposed at the FSE 2017 conference). However, bias in software is still an undervalued and rarely discussed research problem, as software is often seen as a product immune to bias and non-inclusivity. This problem will not be addressed unless software engineering educators start to include this notion as a first-class problem in their foundation courses for future generations of scholars. In this work, we propose a set of bias-aware guidelines and a taxonomy for how to flesh out this problem, and possible solutions to it, in software engineering curricula.
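The automatic fairness testing line of work mentioned above is built on a simple idea: if changing only a protected attribute (e.g., a demographic group) flips the software's decision for an otherwise identical input, the software discriminates on that attribute. The sketch below is an illustrative, hypothetical example of that idea, not the actual FSE 2017 technique or tool; the decision rule and attribute names are invented for demonstration, and the rule is deliberately biased so the test has something to find.

```python
import random

def decision(applicant):
    # Hypothetical decision rule, invented for illustration only.
    # It is deliberately biased: group "A" is favored outright.
    return applicant["income"] > 50000 and applicant["group"] == "A"

def causal_discrimination_rate(decide, n_trials=1000, seed=0):
    """Estimate how often flipping ONLY the protected attribute
    ("group") changes the decision for an otherwise identical input.
    A rate near 0 suggests the decision ignores the attribute."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(n_trials):
        applicant = {"income": rng.randint(10000, 100000),
                     "group": rng.choice(["A", "B"])}
        # Identical input, except the protected attribute is flipped.
        counterpart = dict(applicant,
                           group="B" if applicant["group"] == "A" else "A")
        if decide(applicant) != decide(counterpart):
            flips += 1
    return flips / n_trials

rate = causal_discrimination_rate(decision)
print(f"Estimated causal discrimination rate: {rate:.2f}")
```

Because the sample rule treats the two groups differently whenever income exceeds the threshold, the estimated rate is far from zero; a fair rule that ignored `group` entirely would score 0. Exercises of this shape are one concrete way the paper's educational guidelines could surface bias as a testable property rather than an abstract concern.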
Files for this item:
There are no files associated with this item.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: http://hdl.handle.net/11383/2105584
Warning: the data displayed have not been validated by the university.

Citations
  • Scopus 0