03/24/2025
By Shixiong Li

The Miner School of Computer and Information Sciences, Kennedy College of Sciences, invites you to a Dissertation Proposal defense in Computer and Information Science by Shixiong Li titled "Towards Robust Learning Systems: Investigating Data Poisoning Attacks and Defenses in Diverse Learning Paradigms."

Date: Thursday, April 3, 2025
Time: 10 - 11:30 a.m.
Location: Dandeneau Hall 309
and via Zoom

Committee Members:
Ian Chen (Advisor), Assistant Professor, Miner School of Computer and Information Sciences, UMass Lowell
Xinwen Fu, Professor, Director, iSAFER Center, Miner School of Computer and Information Sciences, Kennedy College of Sciences, UMass Lowell
Benyuan Liu, Professor, Miner School of Computer and Information Sciences, UMass Center for Digital Health (CDH), Computer Networking Lab, CHORDS, UMass Lowell
Ning (Nicole) Wang, Assistant Professor, Department of Computer Science and Engineering, University of South Florida

Abstract

Poisoning attacks represent a critical threat to modern machine learning systems, especially as deep neural networks (DNNs) are increasingly deployed in sensitive applications such as medical image diagnosis. By inserting a small number of malicious samples into the training dataset, an adversary can implant hidden backdoors that cause victim models to misclassify inputs containing specific triggers, while performance on benign data remains largely unaffected. In this proposal, I will introduce two novel poisoning attacks that advance the state of the art in this field.
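
To make the threat model concrete, the following is a minimal sketch of the classic patch-trigger poisoning recipe (in the style of BadNets): a small fraction of training images is stamped with a fixed trigger and relabeled to the attacker's target class. The patch shape, poisoning rate, and function names here are illustrative assumptions, not the attacks proposed in this work.

```python
import numpy as np

def poison_sample(image: np.ndarray, target_label: int,
                  patch_value: float = 255.0, patch_size: int = 3):
    """Stamp a small square trigger in the bottom-right corner and
    relabel the sample to the attacker's target class (BadNets-style sketch)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:, :] = patch_value
    return poisoned, target_label

def poison_dataset(images, labels, target_label, rate=0.01, seed=0):
    """Poison a small fraction of the training set; all other samples
    remain benign, so clean accuracy is largely unaffected."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i], labels[i] = poison_sample(images[i], target_label)
    return images, labels
```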

The first method, OpenTrigger, leverages a dynamic pool for trigger selection. Instead of relying on a single, static trigger, our approach constructs a diverse pool of candidate triggers through optimization techniques such as Particle Swarm Optimization (PSO). OpenTrigger not only maximizes the attack success rate (ASR) across varied input conditions but also improves stealthiness by evading detection methods that rely on identifying fixed trigger patterns.
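
For intuition, the sketch below shows how a plain PSO loop can search a trigger parameter space (e.g., patch position, size, and color encoded as a vector in [0, 1]^dim) and keep a pool of the best candidates found. The scoring function is a generic stand-in for an attack-success surrogate; it, the hyperparameters, and the pool-selection rule are assumptions for illustration, not the OpenTrigger objective itself.

```python
import numpy as np

def pso_trigger_pool(score_fn, dim, pool_size=10, n_particles=30,
                     iters=50, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Toy PSO over trigger parameters; returns a pool of top-scoring
    candidates. `score_fn(vec) -> float` is a hypothetical surrogate for
    attack success rate. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_particles, dim))            # particle positions
    vel = np.zeros_like(pos)                        # particle velocities
    pbest = pos.copy()                              # personal best positions
    pbest_score = np.array([score_fn(p) for p in pos])
    gbest = pbest[np.argmax(pbest_score)]           # global best position
    history = []                                    # every candidate evaluated

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)
        scores = np.array([score_fn(p) for p in pos])
        improved = scores > pbest_score
        pbest[improved] = pos[improved]
        pbest_score[improved] = scores[improved]
        gbest = pbest[np.argmax(pbest_score)]
        history.extend(zip(scores.tolist(), pos.tolist()))

    # Return the highest-scoring candidates seen, as a pool of triggers.
    history.sort(key=lambda t: t[0], reverse=True)
    return [np.array(p) for _, p in history[:pool_size]]
```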

The second method, AutoPoison, uncovers a previously overlooked vulnerability within standard image preprocessing pipelines. We demonstrate that common image resizing algorithms (e.g., bilinear, bicubic, Lanczos) can induce subtle, semantically null artifacts that are nonetheless learnable by DNNs. As a result, AutoPoison embeds imperceptible backdoors into models using only the default resizing step, without any overt modification of the input data.
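
The existence of such artifacts is easy to observe: downscaling the same image with two different interpolation kernels produces pixel-level residuals that are visually negligible yet systematic. The sketch below, using Pillow, only measures that residual; it is not the AutoPoison construction, and the input path is a hypothetical placeholder.

```python
import numpy as np
from PIL import Image

def resize_residual(path, size=(32, 32)):
    """Downscale one image with two common resampling kernels and report
    the per-pixel residual between the results. The residual is small in
    magnitude but consistent across images, i.e., the kind of spurious
    signal a DNN can learn. Illustrative measurement only."""
    img = Image.open(path).convert("RGB")
    bilinear = np.asarray(img.resize(size, Image.BILINEAR), dtype=np.float32)
    lanczos = np.asarray(img.resize(size, Image.LANCZOS), dtype=np.float32)
    residual = bilinear - lanczos
    print(f"mean |residual| = {np.abs(residual).mean():.3f} (0-255 scale)")
    print(f"max  |residual| = {np.abs(residual).max():.3f}")
    return residual

if __name__ == "__main__":
    resize_residual("example.jpg")  # hypothetical input path
```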

Extensive experiments on datasets such as CIFAR-10, CIFAR-100, GTSRB, and TinyImageNet confirm that both OpenTrigger and AutoPoison achieve high ASRs while preserving overall model accuracy on benign data. Moreover, the evaluations indicate that both attack frameworks are highly resilient to state-of-the-art defenses, including robust training schemes and backdoor detection methods.

In summary, our investigations advance the understanding of poisoning attacks, thus providing a solid foundation for developing next-generation robust learning systems.