Unified Parameter-Efficient Unlearning for LLMs

Abstract

The advent of Large Language Models (LLMs) has revolutionized natural language processing, enabling advanced understanding and reasoning capabilities across a variety of tasks. Fine-tuning these models for specific domains, particularly through Parameter-Efficient Fine-Tuning (PEFT) strategies like LoRA, has become a prevalent practice due to its efficiency. However, this raises significant privacy and security concerns, as models may inadvertently retain and disseminate sensitive or undesirable information. To address these issues, we introduce a novel instance-wise unlearning framework, LLMEraser, which systematically categorizes unlearning tasks and applies precise parameter adjustments using influence functions. Unlike traditional unlearning techniques that are often limited in scope and require extensive retraining, LLMEraser is designed to handle a broad spectrum of unlearning tasks without compromising model performance. Extensive experiments on benchmark datasets demonstrate that LLMEraser excels in efficiently managing various unlearning scenarios while maintaining the overall integrity and efficacy of the models.
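The core idea of applying an influence-function-based parameter correction to the fine-tuned adapter can be sketched as follows. This is only an illustrative sketch of the general technique, not the LLMEraser implementation: the function names, the conjugate-gradient solver, and the `lora_params` / `retain_loss_fn` / `forget_loss_fn` arguments are hypothetical placeholders assumed for this example.

```python
# Illustrative sketch: influence-function-style unlearning on LoRA adapter
# parameters. Classic influence functions approximate the effect of removing
# a training instance z as  delta_theta ≈ (1/n) * H^{-1} * grad L(z, theta),
# where H is the Hessian of the loss on the retained data.
import torch
from torch.autograd import grad


def flat_grad(loss, params, create_graph=False):
    """Flatten gradients of `loss` w.r.t. `params` into one vector."""
    grads = grad(loss, params, create_graph=create_graph, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])


def hvp(loss_fn, params, vec):
    """Hessian-vector product of the retained-data loss with `vec`."""
    g = flat_grad(loss_fn(), params, create_graph=True)
    return flat_grad(g @ vec, params)


def conjugate_gradient(hvp_fn, b, iters=50, damping=1e-3):
    """Approximately solve (H + damping * I) x = b without forming H."""
    x = torch.zeros_like(b)
    r, p = b.clone(), b.clone()
    rs_old = r @ r
    for _ in range(iters):
        Ap = hvp_fn(p) + damping * p
        alpha = rs_old / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if rs_new.sqrt() < 1e-6:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x


def unlearn_instances(lora_params, retain_loss_fn, forget_loss_fn, n_train):
    """One-shot correction of the adapter: theta <- theta + H^{-1} g_forget / n."""
    g_forget = flat_grad(forget_loss_fn(), lora_params)
    delta = conjugate_gradient(
        lambda v: hvp(retain_loss_fn, lora_params, v), g_forget
    ) / n_train
    # Write the flattened correction back into the adapter tensors.
    offset = 0
    with torch.no_grad():
        for p in lora_params:
            n = p.numel()
            p.add_(delta[offset:offset + n].view_as(p))
            offset += n
```

Because only the low-rank adapter parameters enter the Hessian-vector products, the update stays tractable relative to retraining or applying influence functions to the full model, which is the efficiency argument the abstract makes.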

Publication
In ICLR 2025

Authors
Chenlu Ding
Jiancan Wu (Associate Researcher)
Jinda Lu
Xiang Wang (Professor)
Xiangnan He (Professor)