Bias-aware guidelines and fairness-preserving taxonomy in software engineering education

Spoletini P.
2019-01-01

Abstract

This innovative-practice work-in-progress paper tackles the problem of unfairness and bias in software, which has recently emerged in countless cases. This unfairness can appear in the way software makes its decisions, or it can limit software functionality to working only for certain populations. Well-known examples of this problem are the Microsoft Kinect facial recognition algorithm, which does not work properly for players with darker skin, and the software used in 2016 by Amazon.com to determine the parts of the United States to which to offer free same-day delivery, whose decisions excluded minority neighborhoods from the program. These phenomena often have their roots in the fact that software is created by humans who are biased and who live in biased, non-inclusive environments. Recent research from the software engineering community is starting to tackle this problem at many levels, from requirements analysis to automatic fairness testing (first proposed at the FSE 2017 conference). However, bias in software is still an undervalued and rarely discussed research problem, as software is often seen as a product immune to bias and non-inclusivity. This problem will not be addressed unless software engineering educators include it as a first-class concern in the foundational courses they teach to future generations of scholars. In this work, we propose a set of bias-aware guidelines and a taxonomy for fleshing out this problem, and possible solutions to it, in software engineering curricula.
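The automatic fairness testing mentioned in the abstract can be illustrated with a minimal sketch, in the spirit of the FSE 2017 fairness testing work: generate inputs, alter only a protected attribute, and check whether the software's decision changes. The decision function, attribute names, and value domains below are hypothetical placeholders, not taken from the paper.

import random

# Hypothetical protected attribute and input domains (illustrative only).
PROTECTED_ATTRIBUTE = "group"
ATTRIBUTE_DOMAINS = {
    "group": ["A", "B"],
    "income": list(range(0, 200_001, 5_000)),
    "zip_prefix": list(range(100, 1000)),
}

def loan_decision(applicant):
    # Hypothetical system under test: deliberately penalizes group "B"
    # by requiring a higher income threshold.
    threshold = 40_000 if applicant["group"] == "A" else 60_000
    return applicant["income"] > threshold

def random_applicant():
    # Draw one random value per attribute to form a test input.
    return {name: random.choice(domain) for name, domain in ATTRIBUTE_DOMAINS.items()}

def causal_discrimination_rate(decide, trials=10_000):
    # Fraction of random inputs whose decision changes when only the
    # protected attribute is altered.
    discriminated = 0
    for _ in range(trials):
        applicant = random_applicant()
        baseline = decide(applicant)
        if any(
            decide(dict(applicant, **{PROTECTED_ATTRIBUTE: value})) != baseline
            for value in ATTRIBUTE_DOMAINS[PROTECTED_ATTRIBUTE]
        ):
            discriminated += 1
    return discriminated / trials

if __name__ == "__main__":
    rate = causal_discrimination_rate(loan_decision)
    print(f"Estimated causal discrimination rate: {rate:.3f}")

Running this sketch reports a non-zero rate for the deliberately biased loan_decision above, which is the kind of evidence such a test is meant to surface; a decision function that ignores the protected attribute would report a rate of 0.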
Year: 2019
Published in: Proceedings - Frontiers in Education Conference, FIE
Conference: 48th Frontiers in Education Conference, FIE 2018
Conference location: USA
Conference year: 2018

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11383/2105584

Citations
  • Scopus: 3
  • Web of Science (ISI): 0