Customizing In-context Learning for Dynamic Interest Adaption in LLM-based Recommendation

Abstract

Frequently updating Large Language Model (LLM)-based recommender systems to adapt to dynamic user interests—as done for traditional ones—is impractical due to high training costs, even with acceleration methods. This work explores the possibility of adapting the model to dynamic user interests without any model-level updates via In-context Learning (ICL), which enables adaptation through few-shot examples within input prompts. While using recent user interactions as ICL demonstrations offers a potential solution for dynamic interest adaptation, existing LLM-based recommenders face critical limitations: recommendation-specific tuning often diminishes the model’s in-context learning ability, and the original LLM’s ICL lacks task-specific optimization for recommendations. To bridge this gap, we introduce RecICL, a framework that establishes recommendation-oriented in-context learning by structuring recent user interactions and current inputs into ICL formats. RecICL achieves dual objectives: (1) preserving fundamental ICL capabilities during recommendation adaptation and (2) dynamically capturing user preference evolution through the most recent interactions. Extensive experiments across multiple benchmarks demonstrate RecICL’s superior performance, achieving better results without model updates.
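As a rough illustration of the idea the abstract describes—structuring a user's most recent interactions as in-context demonstrations ahead of the current query—a minimal sketch follows. The template, field names, and function are illustrative assumptions for this page, not RecICL's exact prompt format.

    # Minimal sketch of recommendation-oriented ICL prompt construction.
    # The template and field names are assumptions, not RecICL's exact format.
    def build_icl_prompt(recent_interactions, history, candidate):
        """Turn the user's most recent interactions into few-shot demonstrations,
        then append the current recommendation query."""
        demos = []
        for past_history, past_item, liked in recent_interactions:
            demos.append(
                f"User history: {', '.join(past_history)}\n"
                f"Candidate item: {past_item}\n"
                f"Answer: {'Yes' if liked else 'No'}"
            )
        query = (
            f"User history: {', '.join(history)}\n"
            f"Candidate item: {candidate}\n"
            f"Answer:"
        )
        return "\n\n".join(demos + [query])

    # Example usage with toy data.
    prompt = build_icl_prompt(
        recent_interactions=[(["Movie A", "Movie B"], "Movie C", True)],
        history=["Movie D", "Movie E"],
        candidate="Movie F",
    )

Because the demonstrations are drawn from the newest interactions at inference time, the prompt tracks interest drift without any parameter updates.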

Publication
In ACL 2025

Citation:

@inproceedings{bao2025recicl,
  title     = {Customizing In-context Learning for Dynamic Interest Adaption in LLM-based Recommendation},
  author    = {Keqin Bao and Ming Yan and Yang Zhang and Jizhi Zhang and Wenjie Wang and Fuli Feng and Xiangnan He},
  booktitle = {ACL 2025},
  year      = {2025}
}
Keqin Bao
Jizhi Zhang
Wenjie Wang (Professor)
Fuli Feng (Professor)
Xiangnan He (Professor)