Decoding Matters: Addressing Amplification Bias and Homogeneity Issue for LLM-based Recommendation

Abstract

Adapting Large Language Models (LLMs) for recommendation requires careful consideration of the decoding process, given the inherent differences between generating items and natural language. Existing approaches often directly apply LLMs’ original decoding methods. However, we find these methods encounter significant challenges: 1) amplification bias—where standard length normalization inflates scores for items containing tokens with generation probabilities close to 1 (termed ghost tokens), and 2) homogeneity issue—generating multiple similar or repetitive items for a user. To tackle these challenges, we introduce a new decoding approach named Debiasing-Diversifying Decoding (D3). D3 disables length normalization for ghost tokens to alleviate amplification bias, and it incorporates a text-free assistant model to encourage tokens less frequently generated by LLMs, counteracting recommendation homogeneity. Extensive experiments on real-world datasets demonstrate the method’s effectiveness in enhancing accuracy and diversity.
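To make the two mechanisms in the abstract concrete, below is a minimal Python sketch of ghost-token-aware length normalization combined with an assistant model. It is an illustrative rendering, not the paper's implementation: the ghost-token threshold, the linear mixing rule, the weight alpha, and all function names are assumptions made for this example.

import math

GHOST_THRESHOLD = 0.99  # assumed cutoff: tokens with p close to 1 are "ghost tokens"

def d3_sequence_score(llm_token_probs, assistant_token_probs, alpha=0.1):
    """Score one candidate item, sketching the two ideas from the abstract:
    1) exclude near-deterministic ghost tokens from length normalization, and
    2) mix in a text-free assistant model to promote tokens the LLM
       generates less frequently.
    The combination rule and all parameters here are illustrative assumptions.
    """
    log_score = 0.0
    effective_len = 0  # count only non-ghost tokens in the normalizer
    for p_llm, p_assist in zip(llm_token_probs, assistant_token_probs):
        # Blend LLM and assistant probabilities (hypothetical linear mixing).
        log_score += math.log((1 - alpha) * p_llm + alpha * p_assist)
        if p_llm < GHOST_THRESHOLD:
            # Ghost tokens add ~0 log-probability; leaving them out of the
            # denominator prevents them from inflating the normalized score.
            effective_len += 1
    return log_score / max(effective_len, 1)

# Toy usage: item_b contains two ghost tokens; under naive normalization its
# longer length would inflate its score relative to item_a.
item_a = d3_sequence_score([0.6, 0.5], [0.4, 0.3])
item_b = d3_sequence_score([0.6, 0.999, 0.999, 0.5], [0.4, 0.5, 0.5, 0.3])
print(item_a, item_b)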

Publication
In EMNLP 2024

Citation:

@inproceedings{bao24emnlp,
  author    = {Keqin Bao and
               Jizhi Zhang and
               Yang Zhang and
               Xinyue Huo and
               Chong Chen and
               Fuli Feng},
  title     = {Decoding Matters: Addressing Amplification Bias and Homogeneity Issue
               for LLM-based Recommendation},
  booktitle = {EMNLP},
  year      = {2024},
}