Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs

Abstract

Large Language Models (LLMs) excel at various natural language processing tasks but suffer from hallucination. Existing solutions exploit LLMs' inherent reasoning abilities to alleviate hallucination, for example through self-correction and diverse sampling. However, these methods often overtrust the LLMs' initial answers because of inherent biases. The key to alleviating this issue is to override those inherent biases during answer inspection. To this end, we propose a CounterFactual Multi-Agent Debate (CFMAD) framework. CFMAD presets the stances of LLMs, overriding their inherent biases by compelling them to generate justifications for the correctness of a predetermined answer. LLMs with different preset stances then engage a skeptical critic in a counterfactual debate over the rationality of the generated justifications. Finally, a third-party judge evaluates the debate process to determine the final answer. Extensive experiments on four datasets across three tasks demonstrate the superiority of CFMAD over existing methods.
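
The abstract outlines a three-step pipeline: preset stances with forced justifications, a counterfactual debate against a skeptical critic, and a third-party judgment. The following is a minimal, hypothetical Python sketch of that pipeline under stated assumptions: the llm() helper, the prompt wording, and the default of two debate rounds are placeholders for illustration, not the paper's actual prompts or implementation.

# Minimal sketch of the CFMAD pipeline described in the abstract.
# llm() is a hypothetical helper that queries any chat-style LLM and
# returns its text response; plug in your own API call.

def llm(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your LLM API")

def cfmad(question: str, candidate_answers: list[str], rounds: int = 2) -> str:
    debates = []
    for answer in candidate_answers:
        # 1) Preset stance: compel the model to justify this candidate answer,
        #    overriding whatever it would have answered on its own.
        justification = llm(
            f"Question: {question}\n"
            f"Assume the answer is '{answer}'. "
            f"Give your strongest justification for why it is correct."
        )
        transcript = [f"Debater (stance: {answer}): {justification}"]
        # 2) Counterfactual debate: a skeptical critic attacks the justification
        #    and the stance-holding debater responds, for a few rounds.
        for _ in range(rounds):
            critique = llm(
                "You are a skeptical critic. Point out flaws in this argument:\n"
                + "\n".join(transcript)
            )
            transcript.append(f"Critic: {critique}")
            rebuttal = llm(
                f"Defend the stance that the answer is '{answer}' "
                f"against the critique above:\n" + "\n".join(transcript)
            )
            transcript.append(f"Debater (stance: {answer}): {rebuttal}")
        debates.append((answer, "\n".join(transcript)))
    # 3) Third-party judge: read all debates and decide the final answer.
    summary = "\n\n".join(f"Candidate '{a}':\n{t}" for a, t in debates)
    return llm(
        f"Question: {question}\n"
        f"Here are debates over each candidate answer:\n{summary}\n"
        f"Based only on the strength of the arguments, which candidate "
        f"answer is correct? Reply with the answer only."
    )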

Publication
In COLING 2025

Citation:

@inproceedings{cfmad,
  author       = {Yi Fang and
                  Moxin Li and
                  Wenjie Wang and
                  Lin Hui and
                  Fuli Feng},
  title        = {Counterfactual Debating with Preset Stances for Hallucination Elimination
                  of LLMs},
  booktitle    = {{COLING}},
  pages        = {10554--10568},
  publisher    = {Association for Computational Linguistics},
  year         = {2025}
}