A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution
Di Fu1,2,3; Fares Abawi3; Hugo Carneiro3; Matthias Kerzel3; Ziwei Chen1,2; Erik Strahl3; Xun Liu1,2; Stefan Wermter3
Corresponding Author Email: fu, di; liu, xun
Abstract

To enhance human-robot social interaction, it is essential for robots to process multiple social cues in a complex real-world environment. However, incongruence of input information across modalities is inevitable and can be challenging for robots to process. To tackle this challenge, our study adopted the neurorobotic paradigm of crossmodal conflict resolution to make a robot express human-like social attention. For the human study, a behavioural experiment was conducted with 37 participants. We designed a round-table meeting scenario with three animated avatars to improve ecological validity. Each avatar wore a medical mask to obscure the facial cues of the nose, mouth, and jaw. The central avatar shifted its eye gaze while the peripheral avatars generated sound. Gaze direction and sound locations were either spatially congruent or incongruent. We observed that the central avatar's dynamic gaze could trigger crossmodal social attention responses. In particular, human performance was better under the congruent audio-visual condition than under the incongruent condition. For the robot study, our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively. After the trained model was mounted on the iCub, the robot was exposed to laboratory conditions similar to those of the human experiment. While human performance was superior overall, our trained model demonstrated that it could replicate attention responses similar to those of humans.
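The abstract mentions that the trained model predicts audio-visual saliency and attends selectively before driving the iCub's gaze. The snippet below is a minimal, purely illustrative sketch of such a selection step, not the authors' implementation: a visual saliency map and an audio localization prior (toy stand-ins for model outputs) are fused, and the most salient location is converted into gaze angles. The function names, map resolution, field-of-view values, and fusion weight are all assumptions made for illustration.

```python
import numpy as np

H, W = 120, 160              # assumed saliency-map resolution
H_FOV, V_FOV = 60.0, 40.0    # assumed horizontal/vertical field of view (degrees)

def fuse_audio_visual(visual_map, audio_prior, audio_weight=0.4):
    """Blend a normalized visual saliency map with an audio localization prior."""
    v = visual_map / (visual_map.sum() + 1e-8)
    a = audio_prior / (audio_prior.sum() + 1e-8)
    return (1.0 - audio_weight) * v + audio_weight * a

def pixel_to_gaze_angles(y, x):
    """Map a pixel location to (azimuth, elevation) gaze angles in degrees."""
    azimuth = (x / (W - 1) - 0.5) * H_FOV
    elevation = -(y / (H - 1) - 0.5) * V_FOV
    return azimuth, elevation

# Toy stand-ins for model outputs: a gaze-cued hotspot on the right and a
# sound source on the left periphery (a spatially incongruent trial).
visual_map = np.zeros((H, W)); visual_map[60, 130] = 1.0
audio_prior = np.zeros((H, W)); audio_prior[60, 30] = 1.0

fused = fuse_audio_visual(visual_map, audio_prior)   # visual cue weighted higher here
y, x = np.unravel_index(np.argmax(fused), fused.shape)
print("Attended target (azimuth, elevation):", pixel_to_gaze_angles(y, x))
```

In a real robot setup, the resulting angles would then be passed to the robot's gaze controller; that interface is omitted here.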

Keywords: Crossmodal social attention; Eye gaze; Conflict processing; Saliency prediction model; iCub robot
Publication Year: 2023
Language: English
Journal: International Journal of Social Robotics
Article Type: Empirical study
Indexed By: EI
Document Type: Journal article
Identifier: https://ir.psych.ac.cn/handle/311026/44792
Collection: CAS Key Laboratory of Behavioral Science
Affiliations:
1.CAS Key Laboratory of Behavioral Science, Institute of Psychology, Beijing, China
2.Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
3.Department of Informatics, University of Hamburg, Hamburg, Germany
First Author Affiliation: CAS Key Laboratory of Behavioral Science
Recommended Citation:
GB/T 7714: Di Fu, Fares Abawi, Hugo Carneiro, et al. A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution[J]. International Journal of Social Robotics, 2023.
APA: Di Fu, Fares Abawi, Hugo Carneiro, Matthias Kerzel, Ziwei Chen, ... & Stefan Wermter. (2023). A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution. International Journal of Social Robotics.
MLA: Di Fu, et al. "A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution." International Journal of Social Robotics (2023).
Files in This Item:
File Name/Size: A Trained Humanoid R... (4323 KB)
Document Type: Journal article
Version: Author's accepted manuscript
Access: Open access
License: CC BY-NC-SA
File Name: A Trained Humanoid Robot can Perform Human-Like Crossmodal Social Attention and Conflict Resolution.pdf
Format: Adobe PDF