ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation
Chenglong Wang1; Hang Zhou1; Yimin Hu1; Yifu Huo1; Bei Li1; Tongran Liu3; Tong Xiao1,2; Jingbo Zhu1,2
First author: Chenglong Wang
Corresponding author emails: xiaotong@mail.neu.edu.cn; zhujingbo@mail.neu.edu.cn
Institute of Psychology affiliation order: 3
Abstract

Applying Reinforcement Learning (RL) to sequence generation models enables the direct optimization of long-term rewards (e.g., BLEU and human feedback), but typically requires large-scale sampling over a space of action sequences. This poses a computational challenge in practical sequence generation problems, such as machine translation, where we often deal with a large action space (e.g., a vocabulary) and long action sequences (e.g., translations). In this work, we introduce two-stage sampling and dynamic sampling approaches to improve sampling efficiency when training sequence generation models via RL. We experiment with our approaches on traditional sequence generation tasks, including machine translation and abstractive summarization. Furthermore, we evaluate our approaches in RL from human feedback (RLHF) by training a large language model with a reward model. Experimental results show that the efficient sampling-based RL, referred to as ESRL, outperforms all baselines in both training efficiency and memory consumption. Notably, ESRL yields consistent performance gains over the strong REINFORCE, minimum risk training, and proximal policy optimization methods.
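The bottleneck described above is concrete: each RL training step must sample complete action sequences token by token, with every step a categorical draw over the full vocabulary. As a minimal illustration of that baseline cost, here is a hypothetical PyTorch sketch of one sequence-level REINFORCE update (REINFORCE is one of the baselines named in the abstract; this is not the authors' ESRL code, and TinyDecoder, reward_fn, and all sizes are invented stand-ins):

```python
import torch
import torch.nn as nn

VOCAB_SIZE, HIDDEN, MAX_LEN = 1000, 64, 20  # illustrative sizes, not from the paper

class TinyDecoder(nn.Module):
    """Stand-in autoregressive model; the paper trains real NMT/LLM models."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, tokens, state=None):
        h, state = self.rnn(self.embed(tokens), state)
        return self.out(h), state

def sample_sequence(model):
    """Draw one full action sequence, accumulating its log-probability.
    This loop is the sampling cost ESRL targets: MAX_LEN draws, each over
    the entire vocabulary-sized action space."""
    token = torch.zeros(1, 1, dtype=torch.long)  # assume token id 0 = <bos>
    state, log_prob, seq = None, 0.0, []
    for _ in range(MAX_LEN):
        logits, state = model(token, state)
        dist = torch.distributions.Categorical(logits=logits[:, -1])
        action = dist.sample()                   # one action from the vocab
        log_prob = log_prob + dist.log_prob(action)
        seq.append(int(action))
        token = action.view(1, 1)
    return seq, log_prob

def reward_fn(seq):
    """Placeholder for a sequence-level reward such as BLEU or a learned
    reward model; a toy score here so the sketch runs end to end."""
    return sum(t % 2 == 0 for t in seq) / len(seq)

model = TinyDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One REINFORCE update: minimize -(reward * log p(sampled sequence)).
seq, log_prob = sample_sequence(model)
loss = -reward_fn(seq) * log_prob
opt.zero_grad()
loss.backward()
opt.step()
```

Per the abstract, ESRL's two-stage and dynamic sampling aim to reduce how much of this per-update sampling work is needed, rather than to change the REINFORCE-style objective itself.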

Year: 2023
Language: English
DOI: 10.48550/arXiv.2308.02223
Journal: arXiv
ISSN: 2159-5399
Volume: 38, Issue: 17, Pages: 19107-19115
Journal article type: Review
Indexed by: EI
Document type: Journal article
Identifier: http://ir.psych.ac.cn/handle/311026/45254
Collection: CAS Key Laboratory of Behavioral Science
Affiliations:
1.School of Computer Science and Engineering, Northeastern University, Shenyang, China
2.NiuTrans Research, Shenyang, China
3.CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China
Recommended citation formats
GB/T 7714
Chenglong Wang, Hang Zhou, Yimin Hu, et al. ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation[J]. arXiv, 2023, 38(17): 19107-19115.
APA Chenglong Wang, Hang Zhou, Yimin Hu, Yifu Huo, Bei Li, ... & Jingbo Zhu. (2023). ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation. arXiv, 38(17), 19107-19115.
MLA Chenglong Wang, et al. "ESRL: Efficient Sampling-based Reinforcement Learning for Sequence Generation". arXiv 38.17 (2023): 19107-19115.
Files in this item
File name/size | Document type | Version | Access | License
ESRL_ Efficient Samp (276KB) | Journal article | Published version | Restricted access | CC BY-NC-SA
 

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.