Direct Multi-Turn Preference Optimization for Language Agents

Abstract

Adapting Large Language Models (LLMs) to agent tasks is critical for developing language agents. Direct Preference Optimization (DPO) is a promising technique for this adaptation: it alleviates compounding errors and offers a means to directly optimize the Reinforcement Learning (RL) objective. However, applying DPO to multi-turn tasks presents challenges because the partition function cannot be cancelled. Overcoming this obstacle involves making the partition function independent of the current state and addressing the length disparity between preferred and dispreferred trajectories. In this light, we replace the policy constraint with a state-action occupancy measure constraint in the RL objective and add length normalization to the Bradley-Terry model, yielding a novel loss function named DMPO for multi-turn agent tasks, together with theoretical justification. Extensive experiments on three multi-turn agent task datasets confirm the effectiveness and superiority of the DMPO loss.
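
To make the length-normalization idea concrete, below is a minimal PyTorch-style sketch of a Bradley-Terry preference loss with per-trajectory length normalization. It is an illustration only: the function and variable names are hypothetical, and it omits the occupancy-measure-based construction that defines the actual DMPO loss in the paper.

import torch
import torch.nn.functional as F

def length_normalized_preference_loss(
    policy_logps_w: torch.Tensor,  # per-turn log-probs of the preferred trajectory, shape (T_w,)
    policy_logps_l: torch.Tensor,  # per-turn log-probs of the dispreferred trajectory, shape (T_l,)
    ref_logps_w: torch.Tensor,     # reference-model log-probs, same shape as policy_logps_w
    ref_logps_l: torch.Tensor,     # reference-model log-probs, same shape as policy_logps_l
    beta: float = 0.1,
) -> torch.Tensor:
    """Illustrative Bradley-Terry loss with per-trajectory length normalization."""
    # Sum the policy/reference log-ratios over turns, then divide by trajectory
    # length so that trajectories of different lengths remain comparable.
    ratio_w = (policy_logps_w - ref_logps_w).sum() / policy_logps_w.numel()
    ratio_l = (policy_logps_l - ref_logps_l).sum() / policy_logps_l.numel()
    # Logistic loss on the normalized margin, as in DPO-style objectives.
    return -F.logsigmoid(beta * (ratio_w - ratio_l))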

Publication
In EMNLP 2024

Citation:

@inproceedings{shi24emnlp,
  author    = {Wentao Shi and
               Mengqi Yuan and
               Junkang Wu and
               Qifan Wang and
               Fuli Feng},
  title     = {Direct Multi-Turn Preference Optimization for Language Agents},
  booktitle = {EMNLP},
  year      = {2024}
}
Authors
Wentao Shi (石文焘)
Junkang Wu (吴俊康)
Fuli Feng (冯福利), Professor