A Bi-Step Grounding Paradigm for Large Language Models in Recommendation Systems

Abstract

As attention to Large Language Models (LLMs) in the field of recommendation intensifies, optimizing LLMs for recommendation purposes (referred to as LLM4Rec) becomes crucial for enhancing their recommendation performance. However, existing approaches for LLM4Rec often evaluate performance on restricted candidate sets, which may not accurately reflect the models' overall ranking capabilities. In this paper, we pursue LLM4Rec models with comprehensive ranking capacity and propose a two-step grounding framework known as BIGRec (Bi-step Grounding Paradigm for Recommendation). BIGRec first grounds LLMs to the recommendation space by fine-tuning them to generate meaningful tokens for items, and then identifies the actual items that best correspond to the generated tokens. Through extensive experiments on two datasets, we demonstrate BIGRec's superior performance, its capacity for handling few-shot scenarios, and its versatility across multiple domains. Furthermore, we observe that the marginal benefit of increasing the quantity of training samples is modest for BIGRec, implying that LLMs possess only a limited capability to assimilate statistical information, such as popularity and collaborative filtering, due to their strong semantic priors. These findings also underscore the promise of integrating diverse statistical information into the LLM4Rec framework, pointing towards a potential avenue for future research. Finally, we conduct analyses with BIGRec to explore the characteristics of incorporating recommendation into LLMs, thereby offering prospective insights for the advancement of the field.
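The bi-step paradigm described above can be sketched in code. The snippet below is a minimal illustration, not the paper's implementation: the fine-tuned LLM is replaced by a placeholder generator, and the item-matching step uses a toy hashed bag-of-words embedding with cosine similarity; the actual encoder, prompts, and item catalog are assumptions.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a real text encoder: hashed bag-of-words,
    # L2-normalized so dot products act as cosine similarity.
    vec = np.zeros(64)
    for tok in text.lower().split():
        vec[hash(tok) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def generate_item_tokens(user_history: list[str]) -> str:
    # Step 1 (placeholder): a fine-tuned LLM would generate meaningful
    # tokens describing the next item from the user's history.
    return "sci-fi space opera film"

def ground_to_catalog(generated: str, catalog: list[str]) -> str:
    # Step 2: ground the generated tokens to the most similar
    # actual item in the recommendation space.
    g = embed(generated)
    scores = [float(g @ embed(item)) for item in catalog]
    return catalog[int(np.argmax(scores))]
```

In this sketch, grounding is a nearest-neighbor lookup over item embeddings, which keeps generation unconstrained while guaranteeing that the final recommendation is always a real catalog item.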

Publication
In ACM Transactions on Recommender Systems

Citation:

@article{bigrec,
author = {Keqin Bao and Jizhi Zhang and Wenjie Wang and Yang Zhang and Zhengyi Yang and Yanchen Luo and Chong Chen and Fuli Feng and Qi Tian},
title = {A Bi-Step Grounding Paradigm for Large Language Models in Recommendation Systems},
year = {2025},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
journal = {ACM Trans. Recomm. Syst.},
}
Keqin Bao
Jizhi Zhang
Wenjie Wang
Zhengyi Yang
Yanchen Luo
Fuli Feng