CrAM: Credibility-Aware Attention Modification in LLMs for Combating Misinformation in RAG

Abstract

Retrieval-Augmented Generation (RAG) can alleviate the hallucinations of Large Language Models (LLMs) by referencing external documents. However, misinformation in those external documents may mislead LLMs’ generation. To address this issue, we explore the task of ‘credibility-aware RAG’, in which LLMs automatically adjust the influence of retrieved documents based on their credibility scores to counteract misinformation. To this end, we introduce a plug-and-play method named Credibility-aware Attention Modification (CrAM). CrAM identifies influential attention heads in LLMs and adjusts their attention weights based on the credibility of the documents, thereby reducing the impact of low-credibility documents. Experiments on Natural Questions and TriviaQA using Llama2-13B, Llama3-8B, and Qwen1.5-7B show that CrAM improves the RAG performance of LLMs against misinformation pollution by over 20%, even surpassing supervised fine-tuning methods.
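At a high level, the mechanism the abstract describes is a rescaling of attention: on a set of previously identified influential heads, the attention mass flowing to tokens from low-credibility documents is scaled down by the documents’ credibility scores, and each attention row is renormalized. The sketch below illustrates that idea under those assumptions; the function name, argument names, and the precomputed `influential_heads` input are illustrative, not the paper’s actual implementation.

```python
import torch

def cram_modify_attention(attn_weights: torch.Tensor,
                          token_credibility: torch.Tensor,
                          influential_heads: list[int]) -> torch.Tensor:
    """Sketch: downweight attention to low-credibility tokens on selected heads.

    attn_weights:      (num_heads, seq_len, seq_len) post-softmax attention.
    token_credibility: (seq_len,) scores in [0, 1]; tokens from a retrieved
                       document share that document's credibility score, while
                       other tokens (query, instructions) get 1.0.
    influential_heads: indices of heads identified as most influential
                       (assumed to be computed beforehand).
    """
    modified = attn_weights.clone()
    for h in influential_heads:
        # Scale attention toward each token by that token's credibility.
        scaled = modified[h] * token_credibility.unsqueeze(0)
        # Renormalize each row so the attention weights still sum to 1.
        modified[h] = scaled / scaled.sum(dim=-1, keepdim=True).clamp_min(1e-9)
    return modified
```

Because this only rescales and renormalizes existing attention weights on chosen heads, it needs no gradient updates or fine-tuning, which is consistent with the abstract’s description of CrAM as plug-and-play.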

Publication
In AAAI 2025

Citation:

@article{deng2024cram,
  title={CrAM: Credibility-Aware Attention Modification in LLMs for Combating Misinformation in RAG},
  author={Deng, Boyi and Wang, Wenjie and Zhu, Fengbin and Wang, Qifan and Feng, Fuli},
  journal={arXiv preprint arXiv:2406.11497},
  year={2024}
}
Boyi Deng (邓博以)
Prof. Wenjie Wang (王文杰)
Prof. Fuli Feng (冯福利)