Institutional Repository of Key Laboratory of Behavioral Science, CAS
ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation
Chenglong Wang1
First Author | Chenglong Wang |
Corresponding Author Email | xiaotong@mail.neu.edu.cn ; zhujingbo@mail.neu.edu.cn |
Institute of Psychology Affiliation Order | 3 |
Abstract | Applying Reinforcement Learning (RL) to sequence generation models enables the direct optimization of long-term rewards (e.g., BLEU and human feedback), but typically requires large-scale sampling over a space of action sequences. This poses a computational challenge in practical sequence generation problems, such as machine translation, where we often deal with a large action space (e.g., a vocabulary) and long action sequences (e.g., translations). In this work, we introduce two-stage sampling and dynamic sampling approaches to improve the sampling efficiency when training sequence generation models via RL. We experiment with our approaches on traditional sequence generation tasks, including machine translation and abstractive summarization. Furthermore, we evaluate our approaches in RL from human feedback (RLHF) by training a large language model with a reward model. Experimental results show that the efficient sampling-based RL, referred to as ESRL, can outperform all baselines in terms of both training efficiency and memory consumption. Notably, ESRL yields consistent performance gains over the strong REINFORCE, minimum risk training, and proximal policy optimization methods. |
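The large-scale sampling that the abstract refers to can be illustrated with a minimal tabular REINFORCE sketch: a per-step softmax policy, sequences sampled token by token, and a baseline-adjusted policy-gradient update. This is not the paper's ESRL method; the toy vocabulary, the token-overlap reward, and all function names here are illustrative assumptions.

```python
import math
import random

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def sample_sequence(logits_per_step, rng):
    """Sample one action sequence token by token from a tabular policy;
    return the sequence and its total log-probability."""
    seq, logp = [], 0.0
    for logits in logits_per_step:
        probs = softmax(logits)
        tok = rng.choices(range(len(probs)), weights=probs)[0]
        seq.append(tok)
        logp += math.log(probs[tok])
    return seq, logp

def reinforce_step(logits_per_step, target, n_samples=16, lr=0.5, rng=None):
    """One REINFORCE update: draw n_samples sequences, use token overlap
    with `target` as a toy reward, subtract the batch mean as a baseline,
    and ascend the policy gradient on the tabular logits in place.
    Returns the batch-average reward."""
    rng = rng or random.Random(0)
    samples = [sample_sequence(logits_per_step, rng) for _ in range(n_samples)]
    rewards = [sum(a == b for a, b in zip(seq, target)) / len(target)
               for seq, _ in samples]
    baseline = sum(rewards) / len(rewards)
    for (seq, _), r in zip(samples, rewards):
        adv = r - baseline
        for t, tok in enumerate(seq):
            probs = softmax(logits_per_step[t])
            for v in range(len(probs)):
                # grad of log pi(tok) w.r.t. logit v: one-hot(tok) - probs[v]
                g = (1.0 if v == tok else 0.0) - probs[v]
                logits_per_step[t][v] += lr * adv * g / n_samples
    return baseline
```

Even in this toy setting, each update draws `n_samples` full sequences, which is exactly the per-step sampling cost that grows with vocabulary size and sequence length; ESRL's two-stage and dynamic sampling target that cost in the real, neural-policy setting.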
Year | 2023 |
Language | English |
DOI | 10.48550/arXiv.2308.02223 |
Journal | arXiv |
ISSN | 2159-5399 |
Volume | 38 |
Issue | 4 |
Pages | 19107-19115 |
Article Type | Review |
Indexed By | EI |
Citation Statistics | |
Document Type | Journal Article |
Identifier | https://ir.psych.ac.cn/handle/311026/45254 |
Collection | CAS Key Laboratory of Behavioral Science |
Affiliations | 1. School of Computer Science and Engineering, Northeastern University, Shenyang, China 2. NiuTrans Research, Shenyang, China 3. CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China |
Recommended Citation (GB/T 7714) | Chenglong Wang, Hang Zhou, Yimin Hu, et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation[J]. arXiv, 2023, 38(4): 19107-19115. |
APA | Chenglong Wang, Hang Zhou, Yimin Hu, Yifu Huo, Bei Li, ... & Jingbo Zhu. (2023). ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. arXiv, 38(4), 19107-19115. |
MLA | Chenglong Wang, et al. "ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation." arXiv 38.4 (2023): 19107-19115. |
Files in This Item |
File Name/Size | Document Type | Version | Access | License |
ESRL_ Efficient Samp(276KB) | Journal Article | Author's Accepted Manuscript | Open Access | CC BY-NC-SA |
Unless otherwise stated, all content in this repository is protected by copyright, with all rights reserved.