Privacy-Preserving Federated Learning to Improve AI Cybersecurity
Abstract
Federated Learning (FL) has become a transformative paradigm for distributed, privacy-preserving machine learning, enabling multiple participants to collaboratively train models without centralizing their raw data. Despite these inherent privacy advantages, recent studies have exposed critical security vulnerabilities in FL ecosystems, including model poisoning, backdoor insertion, data reconstruction, and membership inference attacks [1]–[4]. These threats can compromise both model integrity and data confidentiality, making the protection of federated environments an urgent research priority. In this paper, we present Secure-ML-FL, a comprehensive machine-learning-driven defense framework that strengthens federated learning through three integrated mechanisms: (1) client-level anomaly detection, which uses meta-feature-based outlier identification to flag poisoned updates; (2) trust-weighted robust aggregation, which dynamically reduces the influence of low-trust or adversarial clients; and (3) meta-learning adaptation, which allows the system to evolve against new attack patterns and shifts in data distribution [5]–[8]. To validate our approach, we evaluate Secure-ML-FL on the benchmark network intrusion and cybersecurity datasets CICIDS2017 and UNSW-NB15 under varying non-IID and adversarial settings. Secure-ML-FL reduces the poisoning success rate by 85% on average, keeps model accuracy within 2% of the FedAvg baseline, and lowers communication overhead by approximately 7%. Comparative analysis against state-of-the-art defenses such as Krum, Trimmed Mean, and FedProx confirms the superior robustness and adaptability of our framework in both cross-silo and cross-device federated environments.
The findings underscore that integrating intelligent ML-based detection mechanisms with robust aggregation offers a viable path toward trustworthy and secure federated intelligence systems for next-generation privacy-sensitive applications [9]–[12].
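To make the trust-weighted robust aggregation idea concrete, the following is a minimal sketch of one plausible realization. The median-based distance, the exponential trust score, and the `temperature` parameter are illustrative assumptions for this example, not the exact mechanism defined in the paper.

```python
import numpy as np

def trust_weighted_aggregate(updates, temperature=1.0):
    """Aggregate client model updates, down-weighting outliers.

    `updates` is a list of 1-D parameter vectors, one per client.
    As an illustration, trust is scored by distance to the
    coordinate-wise median update: clients far from the median
    receive exponentially lower aggregation weight.
    """
    U = np.stack(updates)                       # shape: (clients, params)
    median = np.median(U, axis=0)               # robust reference update
    dists = np.linalg.norm(U - median, axis=1)  # per-client deviation
    trust = np.exp(-dists / temperature)        # low-trust clients shrink
    weights = trust / trust.sum()               # normalize to sum to 1
    return weights @ U                          # trust-weighted average

# Example: two honest clients near [1, 1] and one poisoned outlier.
honest_a = np.array([1.0, 1.0])
honest_b = np.array([1.1, 0.9])
poisoned = np.array([10.0, -10.0])
agg = trust_weighted_aggregate([honest_a, honest_b, poisoned])
```

In this toy run the poisoned update lies far from the median, so its trust weight collapses toward zero and the aggregate stays close to the honest clients, which is the behavior the framework's robust aggregation stage is designed to achieve.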