
Secure Privacy-Preserving Techniques for Federated Machine Learning / Simone Bottoni, 21 Dec 2024. 36th cycle (PhD programme)

Secure Privacy-Preserving Techniques for Federated Machine Learning

Simone Bottoni
2024-12-21

Abstract

In the era of big data, computationally efficient and privacy-aware solutions for large-scale Machine Learning problems have become crucial, especially in scenarios where large amounts of data are stored in different locations and owned by multiple entities. Federated Learning is one of the most recent decentralized and computationally scalable approaches to processing large amounts of data in a distributed fashion. However, it raises multiple security and privacy concerns: malicious clients or aggregators within the Federated Learning ecosystem can exploit vulnerabilities to extract sensitive information from other participating entities, potentially leading to data breaches. In this work, we present novel protocols that combine advanced techniques to address the key privacy requirements of a Federated Learning scenario. The final objective is to develop a protocol that ensures a privacy-preserving environment for the data used in Federated Learning, with limited impact on the efficiency of the learning operations and without degrading the underlying models' accuracy. We first introduce a protocol that uses multiple well-known techniques to add a security layer over an existing Federated Learning model, preserving its accuracy and guaranteeing the privacy of the clients' data. However, its performance gap limits its application to scenarios where accuracy and privacy are preferred over efficiency. We therefore present a second protocol that exploits new state-of-the-art techniques to overcome these limitations: it achieves high execution performance, preserves the accuracy of the underlying Federated Learning model, and secures the Federated Learning system, guaranteeing the privacy of the model data.
Keywords: Federated Learning, Aggregator, Verifiability, Privacy, Elliptic Curve, Pedersen Commitment.
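The keywords mention Pedersen commitments over elliptic curves. The thesis text is not available here, but the following toy sketch (not taken from the thesis; group parameters, generators, and variable names are illustrative assumptions) shows the additively homomorphic property of Pedersen commitments that verifiable secure-aggregation protocols typically exploit:

```python
import secrets

# Toy parameters: p = 2q + 1 is a safe prime, so Z_p^* has a subgroup of
# prime order q. A real deployment would use an elliptic-curve group with
# independently derived generators; these small values are for illustration.
p, q = 23, 11
g, h = 4, 9  # assumed independent generators of the order-q subgroup

def commit(m: int, r: int) -> int:
    """Pedersen commitment C = g^m * h^r mod p (hiding and binding)."""
    return pow(g, m, p) * pow(h, r, p) % p

# Each client commits to its local update m_i with fresh randomness r_i.
m1, m2 = 3, 5
r1, r2 = secrets.randbelow(q), secrets.randbelow(q)
c1, c2 = commit(m1, r1), commit(m2, r2)

# Additive homomorphism: the product of commitments is a commitment to the
# sum, so an aggregator's reported sum can be checked against the product
# of the clients' commitments without revealing any individual update.
assert c1 * c2 % p == commit((m1 + m2) % q, (r1 + r2) % q)
```

In an elliptic-curve instantiation the same relation reads C = m·G + r·H, with commitment addition replacing multiplication; the exponent arithmetic above is otherwise identical.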
Files for this record:
No files are associated with this record.

Documents in IRIS are protected by copyright, and all rights are reserved unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11383/2174191
Note: the University validates only attached PDF files.
