In this talk, I will delve into machine unlearning (MU), a critical process for removing the influence of specific training examples from machine learning models to comply with data regulations. To bridge the gap between exact and approximate unlearning, I will approach the MU problem from a novel model-based perspective: model sparsification through weight pruning. Through theoretical analysis and practical experiments, I will demonstrate the substantial improvements achieved by incorporating model sparsity to enhance multi-criteria unlearning while maintaining efficiency. Additionally, I will showcase the practical impact of sparsity-aided MU in addressing challenges such as defending against backdoor attacks and augmenting transfer learning through coreset selection. Beyond weight sparsity, I will also introduce the concept of 'weight saliency' in MU, drawing parallels with input saliency in model explanation. This innovation directs MU's attention toward specific model weights rather than the entire model, improving both effectiveness and efficiency. I will show that, in preventing conditional diffusion models from generating harmful images, saliency-aware unlearning achieves nearly 100% unlearning accuracy, outperforming current state-of-the-art concept-erasing diffusion models.
Dr. Sijia Liu is currently an Assistant Professor in the Department of Computer Science and Engineering at Michigan State University, and an Affiliated Professor at the MIT-IBM Watson AI Lab, IBM Research. His research expertise lies in machine learning, optimization, and signal processing, with a particular focus on trustworthy and scalable ML. He received the Best Paper Runner-Up Award at the Conference on Uncertainty in Artificial Intelligence (UAI) in 2022 and the Best Student Paper Award at the 42nd IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) in 2017. He was the lead organizer of the 1st and 2nd workshops on New Frontiers in Adversarial Machine Learning (AdvML-Frontiers) at ICML 2022 and 2023.