Abstract
Federated Learning (FL) has gained significant prominence as a means of overcoming data silos in various domains. However, since its introduction, FL has been confronted with the challenge of Non-Independent and Identically Distributed (Non-IID) data, which hinders its broad-scale adoption. In this paper, we present a novel method named Federated Split Averaging (FSA) to tackle the problem of Non-IID data. FSA addresses a key challenge that classical FL fails to overcome: real-world scenarios in which data instances from certain classes are completely missing on some clients. Unlike conventional FL, where a cloud server blindly averages clients' model parameters, FSA classifies clients into strong and weak groups and aggregates their parameters separately. The split parameters are then used to compute dynamic penalty factors, which regularize clients' training and accelerate convergence. Experimental results on real-world datasets demonstrate that the proposed method significantly improves model accuracy on Non-IID data, achieving up to a 7.23% improvement over other state-of-the-art solutions.
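The abstract describes two ingredients: classifying clients into strong and weak groups, and aggregating each group's parameters separately before combining them. A minimal sketch of that idea is below; the class-coverage rule for strong/weak classification, the fixed mixing weight `alpha` (standing in for the paper's dynamic penalty factors), and all function names are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def classify_clients(client_labels, num_classes):
    # Hypothetical rule: a client is "strong" if its local data covers
    # every class, "weak" if some classes are entirely missing.
    strong, weak = [], []
    for cid, labels in client_labels.items():
        (strong if len(set(labels)) == num_classes else weak).append(cid)
    return strong, weak

def split_average(params, strong, weak, alpha=0.5):
    # Aggregate the two groups separately, then blend the group means.
    # Assumes both groups are non-empty; alpha is a placeholder for the
    # paper's dynamic penalty factors.
    strong_avg = np.mean([params[c] for c in strong], axis=0)
    weak_avg = np.mean([params[c] for c in weak], axis=0)
    return alpha * strong_avg + (1 - alpha) * weak_avg
```

In contrast to plain FedAvg, which averages all client parameters in one pass, this keeps the contribution of class-incomplete (weak) clients from being drowned out or from skewing the global model.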
| Original language | English |
|---|---|
| Pages (from-to) | 24018-24029 |
| Number of pages | 12 |
| Journal | IEEE Access |
| Volume | 14 |
| DOIs | |
| Publication status | Published - 6 Feb 2026 |
Keywords
- data distributions
- deep learning
- Federated learning
- heterogeneity
- non-IID
- regularization
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering
Title: Split averaging: bridging the heterogeneity gap in clients data for federated learning