Larger or Smaller Reward Margins to Select Preferences for Alignment?

Abstract

Preference learning is critical for aligning large language models (LLMs) with human values, and the quality of preference datasets plays a central role in this process. While existing metrics primarily assess data quality based on either explicit or implicit reward margins, they often provide contradictory evaluations of the same data. To address this issue, we introduce the alignment potential metric, which quantifies the gap from the model’s current implicit reward margin to the target explicit reward margin, thereby estimating the model’s potential to align with the preference data. Empirical results demonstrate that training on data selected by this metric consistently enhances alignment performance, surpassing existing metrics across different base models and optimization objectives. Furthermore, our method extends to self-play data generation frameworks, where the metric is used to identify high-quality data within the content self-generated by LLMs. In this data generation scenario, our method surpasses current state-of-the-art (SOTA) results across various training settings and demonstrates continual improvements in alignment performance as dataset size and training iterations increase.
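
To make the idea concrete, the sketch below illustrates one plausible reading of the metric described in the abstract: the alignment potential of a preference pair is taken as the gap between an explicit reward margin (e.g., from an external reward model) and the current model's DPO-style implicit reward margin, and pairs with the largest gap are selected for training. The abstract does not give the exact formula, so this is a minimal sketch under that assumption; all function names, fields, and the toy numbers are illustrative, not the authors' code.

# Hypothetical sketch of the alignment potential metric and data selection.
# Assumes a DPO-style implicit reward margin and an explicit margin from an
# external reward model; names and values are invented for illustration.

def implicit_margin(policy_logp_w, policy_logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # DPO-style implicit reward margin of the current model on a preference pair:
    # beta * [(log pi(y_w|x) - log pi_ref(y_w|x)) - (log pi(y_l|x) - log pi_ref(y_l|x))]
    return beta * ((policy_logp_w - ref_logp_w) - (policy_logp_l - ref_logp_l))

def alignment_potential(explicit_margin, implicit_margin_value):
    # Gap from the model's current implicit margin to the target explicit margin,
    # e.g., r(x, y_w) - r(x, y_l) scored by an external reward model.
    return explicit_margin - implicit_margin_value

def select_top_k(pairs, k):
    # Keep the k preference pairs the current model has the most room to learn from,
    # i.e., those with the largest alignment potential.
    return sorted(pairs, key=lambda p: p["alignment_potential"], reverse=True)[:k]

# Toy usage: score two pairs, then keep the one with the larger alignment potential.
pairs = [
    {"id": 0, "alignment_potential": alignment_potential(2.0, implicit_margin(-5.1, -6.0, -5.0, -5.8))},
    {"id": 1, "alignment_potential": alignment_potential(0.3, implicit_margin(-4.2, -7.5, -4.4, -7.1))},
]
selected = select_top_k(pairs, k=1)  # -> pair with id 0
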

Publication
In ICML 2025

Citation:

@inproceedings{huang2025metric,
  title={Larger or Smaller Reward Margins to Select Preferences for Alignment?},
  author={Huang, Kexin and Wu, Junkang and Chen, Ziqian and Wang, Xue and Gao, Jinyang and Ding, Bolin and Wu, Jiancan and He, Xiangnan and Wang, Xiang},
  booktitle={Forty-second International Conference on Machine Learning},
  year={2025},
}
Kexin Huang (黄科鑫)
Junkang Wu (吴俊康)
Jiancan Wu (吴剑灿), Associate Researcher
Xiangnan He (何向南), Professor
Xiang Wang (王翔), Professor