arxiv:2407.01920

To Forget or Not? Towards Practical Knowledge Unlearning for Large Language Models

Published on Jul 2 · Submitted by Ningyu on Jul 3

Abstract

Large Language Models (LLMs) trained on extensive corpora inevitably retain sensitive data, such as personal privacy information and copyrighted material. Recent advancements in knowledge unlearning involve updating LLM parameters to erase specific knowledge. However, current unlearning paradigms are mired in vague forgetting boundaries, often erasing knowledge indiscriminately. In this work, we introduce KnowUnDo, a benchmark covering copyrighted content and user privacy domains, to evaluate whether the unlearning process inadvertently erases essential knowledge. Our findings indicate that existing unlearning methods often suffer from excessive unlearning. To address this, we propose a simple yet effective method, MemFlex, which utilizes gradient information to precisely target and unlearn sensitive parameters. Experimental results show that MemFlex is superior to existing methods in both precise knowledge unlearning and general knowledge retention in LLMs. Code and dataset will be released at https://github.com/zjunlp/KnowUnDo.

Community

Paper author · Paper submitter · edited Jul 3

We introduce KnowUnDo, a benchmark to assess whether unlearning processes unintentionally erase essential knowledge, focusing on copyrighted content and user privacy domains. Our proposed method, MemFlex, leverages gradient information to selectively unlearn sensitive parameters, demonstrating superior performance in maintaining general knowledge while precisely unlearning sensitive information in large language models.
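For readers curious what gradient-informed selective unlearning can look like in practice, here is a minimal, hypothetical sketch; it is not the MemFlex implementation (which will be released in the repo above). It assumes a placeholder `gpt2` model, a toy `forget_batch`, and a simple top-k gradient-magnitude rule for localizing "sensitive" parameters; the paper's actual localization criterion and update rule may differ.

```python
# Hedged sketch of gradient-informed selective unlearning (NOT the authors' MemFlex code).
# Assumptions: placeholder "gpt2" model, toy forget example, top-k gradient-magnitude
# localization, and a single gradient-ascent step on the forget loss.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative placeholder model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy example of text we would like the model to "forget".
forget_batch = tokenizer("Alice's phone number is 555-0199.", return_tensors="pt")

# 1) Compute gradients of the language-modeling loss on the forget data.
model.zero_grad()
out = model(**forget_batch, labels=forget_batch["input_ids"])
out.loss.backward()

# 2) Localize "sensitive" parameters: keep only the top fraction of entries by
#    gradient magnitude within each tensor (an illustrative assumption).
top_frac = 0.05
masks = {}
for name, p in model.named_parameters():
    if p.grad is None:
        continue
    g = p.grad.abs().flatten()
    k = max(1, int(top_frac * g.numel()))
    threshold = torch.topk(g, k).values.min()
    masks[name] = p.grad.abs() >= threshold

# 3) Unlearn: gradient-ascent step on the forget loss, applied only to the
#    localized parameters; everything else stays frozen to preserve general knowledge.
lr = 1e-4
with torch.no_grad():
    for name, p in model.named_parameters():
        if name in masks:
            p += lr * p.grad * masks[name]  # ascent increases the forget loss
```

In practice one would iterate this for several steps and pair it with a retention objective on general data; the snippet only isolates the localize-then-selectively-update idea.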

Hi @Ningyu, congrats on this work! Are you planning on making your dataset available on the Hugging Face Hub?

If yes, it could also be linked to this paper page as explained here: https://huggingface.co/docs/hub/en/datasets-cards#linking-a-paper

Paper author

Yes, we plan to release the dataset on the HF Hub.

Great!


Models citing this paper 0

No model linking this paper


Datasets citing this paper 1

Spaces citing this paper 0

No Space linking this paper


Collections including this paper 7