11/20/2024
By Xingyu Lyu

The Miner School of Computer & Information Sciences invites you to join us for the upcoming CIS proposal defense by Xingyu Lyu.

Proposal Title: Toward Trustworthy Machine Learning Systems: Federated Learning and Large Models in Adversarial Settings
Ph.D. Candidate: Mr. Xingyu Lyu
Degree: Doctoral
Time: Thursday, December 5, 2024, 10-11:30 a.m. EST.
Location: Dandeneau Hall, Room 309 (1 University Ave, Lowell, MA 01854) and via Zoom.

Committee:

Advisor: Yimin (Ian) Chen, Ph.D., Assistant Professor, Miner School of Computer & Information Sciences, University of Massachusetts Lowell.


Committee Members:

1. Benyuan Liu, Ph.D., Professor, Miner School of Computer & Information Sciences, UMass Center for Digital Health (CDH), Computer Networking Lab, CHORDS; University of Massachusetts Lowell.
2. Ning (Nicole) Wang, Ph.D., Assistant Professor, Department of Computer Science and Engineering at the University of South Florida (USF).
3. Sashank Narain, Ph.D., Assistant Professor, Miner School of Computer & Information Sciences, University of Massachusetts Lowell.
4. Xinwen Fu, Ph.D., Professor, Director, iSAFER Center, Miner School of Computer & Information Sciences, Kennedy College of Sciences, University of Massachusetts Lowell.

Brief Abstract:

Federated Learning (FL) allows remote clients to collaboratively train a global model without sharing raw data, but its decentralized nature creates security risks, particularly from adversarial attacks.
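For context, the core FL loop can be sketched in a few lines: each client trains on its own data, and the server only ever averages the resulting model weights, so raw samples never leave the clients. The snippet below is a minimal federated-averaging illustration on a toy linear-regression task, not the specific protocol studied in the proposal.

```python
# Minimal federated-averaging sketch on a toy linear-regression task
# (illustrative only; not the exact protocol analyzed in the proposal).
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: gradient descent on its private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of mean-squared error
        w -= lr * grad
    return w

def federated_round(global_w, clients):
    """Server aggregates client models, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in clients:
        updates.append(local_update(global_w, X, y))  # only weights are shared
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(4):  # four clients, each with its own private data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):  # twenty communication rounds
    global_w = federated_round(global_w, clients)
print("estimated global weights:", global_w)  # approaches [2, -1]
```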

First, existing client selection in wireless FL relies heavily on channel conditions yet overlooks key vulnerabilities such as channel state information (CSI) forgery attacks. To address this gap, we introduce AirTrojan, a novel attack that manipulates selection probabilities to facilitate model poisoning in FL. This highlights the urgent need for enhanced security in wireless FL client selection.
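To see why CSI-driven selection is sensitive to forgery, consider a toy rule in which a client's probability of being chosen grows with its reported channel quality. The softmax rule below is an assumption made purely for illustration; the actual AirTrojan attack and selection mechanism are detailed in the proposal.

```python
# Toy illustration of CSI-driven client selection and why forgery matters.
# The softmax selection rule is an illustrative assumption, not the scheme
# analyzed in the proposal.
import numpy as np

def selection_probs(reported_csi, temperature=1.0):
    """Select clients with probability increasing in reported channel quality."""
    z = np.asarray(reported_csi, dtype=float) / temperature
    z -= z.max()                      # numerical stability
    p = np.exp(z)
    return p / p.sum()

honest_csi = [0.9, 1.1, 1.0, 0.8, 1.2]   # all clients report truthfully
forged_csi = [0.9, 1.1, 1.0, 0.8, 5.0]   # client 4 forges an excellent channel

print("honest:", np.round(selection_probs(honest_csi), 3))
print("forged:", np.round(selection_probs(forged_csi), 3))
# With the forged CSI, client 4 dominates selection, so its (potentially
# poisoned) updates are aggregated far more often than anyone else's.
```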

Moreover, FL’s distributed nature makes it susceptible to backdoor attacks, where attackers can upload malicious updates to compromise the global model. Current defenses largely assume independent and identically distributed (IID) data across clients, resulting in limited effectiveness against backdoor threats in non-IID settings. To address this challenge, we propose FLBuff, a novel framework that introduces a buffer layer between benign and malicious updates using supervised contrastive learning. This approach enhances the resilience of FL models across diverse non-IID scenarios. To the best of our knowledge, FLBuff is the first systematic and comprehensive backdoor defense explicitly designed to tackle the unique challenges posed by non-IID settings.
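As a rough illustration of the contrastive ingredient, a standard supervised contrastive loss pulls embeddings of same-labeled updates together and pushes differently labeled ones apart. FLBuff's actual buffer-layer construction is described in the proposal; the generic loss below is only meant to convey the idea.

```python
# Generic supervised contrastive loss over a batch of embeddings
# (Khosla et al. style); not FLBuff's specific training objective.
import numpy as np

def sup_con_loss(embeddings, labels, temperature=0.1):
    """Pull same-labeled embeddings together, push different labels apart."""
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(z)
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    m = sim.max(axis=1, keepdims=True)                # stable row-wise log-softmax
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))
    same = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    pos_counts = np.maximum(same.sum(axis=1), 1)
    loss = -np.where(same, log_prob, 0.0).sum(axis=1) / pos_counts
    return loss.mean()

rng = np.random.default_rng(0)
# Toy batch: embeddings of "benign" (label 0) vs. "malicious" (label 1) updates.
emb = rng.normal(size=(8, 16))
lab = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(sup_con_loss(emb, lab))
```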

To further reinforce FL defenses, we propose FLAG, an unsupervised method specifically designed to counter model poisoning attacks in non-IID environments. FLAG employs dynamic clustering and assigns trust scores based on multi-layer discrepancies, significantly outperforming state-of-the-art defenses against both targeted and untargeted attacks in diverse non-IID settings.
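The following toy example conveys the flavor of discrepancy-based trust scoring: client updates that sit far from the per-layer median receive lower scores. The median-distance rule and the helper names are illustrative assumptions, not FLAG's actual dynamic-clustering algorithm.

```python
# Toy discrepancy-based trust scoring of client updates; the median-distance
# rule here is an illustrative assumption, not FLAG's actual method.
import numpy as np

def trust_scores(layer_updates):
    """layer_updates: dict layer_name -> (num_clients, dim) update matrix.
    Clients whose updates lie far from the per-layer median get lower trust."""
    num_clients = next(iter(layer_updates.values())).shape[0]
    discrepancy = np.zeros(num_clients)
    for U in layer_updates.values():
        median = np.median(U, axis=0)
        d = np.linalg.norm(U - median, axis=1)
        discrepancy += d / (d.max() + 1e-12)          # normalize per layer
    return 1.0 / (1.0 + discrepancy)                  # higher = more trusted

rng = np.random.default_rng(1)
benign = rng.normal(0.0, 0.1, size=(9, 20))           # nine benign clients
malicious = rng.normal(3.0, 0.1, size=(1, 20))        # one poisoned update
updates = {"layer1": np.vstack([benign, malicious])}
print(np.round(trust_scores(updates), 3))             # last client scores lowest
```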

Finally, we outline future directions to advance the security of FL and large language models (LLMs), including optimizing client selection through contrastive learning, improving robustness in retrieval-augmented generation (RAG), and further developing resilient defenses for LLMs within FL frameworks.