TY - GEN
T1 - FairFed
T2 - 2020 IEEE Applied Imagery Pattern Recognition Workshop, AIPR 2020
AU - Habib Ur Rehman, Muhammad
AU - Mukhtar Dirir, Ahmed
AU - Salah, Khaled
AU - Svetinovic, Davor
N1 - Publisher Copyright:
© 2020 IEEE.
PY - 2021/5/10
Y1 - 2021/5/10
N2 - Federated learning (FL) is a rapidly developing machine learning technique used to perform collaborative model training over decentralized datasets. FL enables privacy-preserving model development whereby the datasets are scattered over a large set of data producers (i.e., devices and/or systems). These data producers train the learning models, encapsulate the model updates with differential privacy techniques, and share them with centralized systems for global aggregation. However, these centralized models are always prone to adversarial attacks (such as data-poisoning and model-poisoning attacks) due to the large number of data producers. Hence, FL methods need to ensure fairness and high-quality model availability across all the participants in the underlying AI systems. In this paper, we propose a novel FL framework, called FairFed, to meet fairness and high-quality data requirements. FairFed provides a fairness mechanism to detect adversaries across the devices and datasets in the FL network and reject their model updates. We use a Python-simulated FL framework to enable large-scale training over the MNIST dataset. We simulate a cross-device model training setting to detect adversaries in the training network. We use TensorFlow Federated and Python to implement the fairness protocol, the deep neural network, and the outlier detection algorithm. We thoroughly test the proposed FairFed framework with random and uniform data distributions across the training network and compare our initial results with the baseline fairness scheme. Our proposed work shows promising results in terms of model accuracy and loss.
AB - Federated learning (FL) is a rapidly developing machine learning technique used to perform collaborative model training over decentralized datasets. FL enables privacy-preserving model development whereby the datasets are scattered over a large set of data producers (i.e., devices and/or systems). These data producers train the learning models, encapsulate the model updates with differential privacy techniques, and share them with centralized systems for global aggregation. However, these centralized models are always prone to adversarial attacks (such as data-poisoning and model-poisoning attacks) due to the large number of data producers. Hence, FL methods need to ensure fairness and high-quality model availability across all the participants in the underlying AI systems. In this paper, we propose a novel FL framework, called FairFed, to meet fairness and high-quality data requirements. FairFed provides a fairness mechanism to detect adversaries across the devices and datasets in the FL network and reject their model updates. We use a Python-simulated FL framework to enable large-scale training over the MNIST dataset. We simulate a cross-device model training setting to detect adversaries in the training network. We use TensorFlow Federated and Python to implement the fairness protocol, the deep neural network, and the outlier detection algorithm. We thoroughly test the proposed FairFed framework with random and uniform data distributions across the training network and compare our initial results with the baseline fairness scheme. Our proposed work shows promising results in terms of model accuracy and loss.
KW - Data Quality
KW - Deep Learning
KW - Fairness
KW - Federated Learning
KW - Model Development
KW - Outlier Detection
UR - https://www.scopus.com/pages/publications/85106195073
U2 - 10.1109/AIPR50011.2020.9425266
DO - 10.1109/AIPR50011.2020.9425266
M3 - Conference contribution
AN - SCOPUS:85106195073
T3 - Proceedings - Applied Imagery Pattern Recognition Workshop
BT - 2020 IEEE Applied Imagery Pattern Recognition Workshop, AIPR 2020
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 13 October 2020 through 15 October 2020
ER -