What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective
Fu, Di1,2,3; Weber, Cornelius3; Yang, Guochun1,2; Kerzel, Matthias3; Nan, Weizhi4; Barros, Pablo3; Wu, Haiyan1,2; Liu, Xun1,2; Wermter, Stefan3
First author: Fu, Di
Corresponding author email: Xun Liu, liux@psych.ac.cn
Institute of Psychology author rank: 1
Abstract

Selective attention plays an essential role in information acquisition and utilization from the environment. In the past 50 years, research on selective attention has been a central topic in cognitive science. Compared with unimodal studies, crossmodal studies are more complex but necessary to solve real-world challenges in both human experiments and computational modeling. Although an increasing number of findings on crossmodal selective attention have shed light on humans' behavioral patterns and neural underpinnings, a much better understanding is still necessary to yield the same benefit for intelligent computational agents. This article reviews studies of selective attention in unimodal visual and auditory and crossmodal audiovisual setups from the multidisciplinary perspectives of psychology and cognitive neuroscience, and evaluates different ways to simulate analogous mechanisms in computational models and robotics. We discuss the gaps between these fields in this interdisciplinary review and provide insights about how to use psychological findings and theories in artificial intelligence from different perspectives.

Keywords: selective attention; visual attention; auditory attention; crossmodal learning; computational modeling; deep learning
Date: 2020-02-27
Language: English
DOI: 10.3389/fnint.2020.00010
Journal: FRONTIERS IN INTEGRATIVE NEUROSCIENCE
ISSN: 1662-5145
Volume: 14; Pages: 18
Article type: article
Indexed by: SCI
Funding projects: National Natural Science Foundation of China (NSFC) [61621136008]; German Research Foundation (DFG), project Transregio Crossmodal Learning [TRR 169]; CAS-DAAD
Publisher: FRONTIERS MEDIA SA
WOS keywords: HUMAN AUDITORY-CORTEX; SUPERIOR-COLLICULUS; MULTISENSORY INTEGRATION; STIMULUS-DRIVEN; TOP-DOWN; NEURAL MECHANISMS; SPATIAL ATTENTION; COGNITIVE CONTROL; VISUAL-ATTENTION; SALIENCY
WOS research areas: Behavioral Sciences; Neurosciences & Neurology
WOS categories: Behavioral Sciences; Neurosciences
WOS accession number: WOS:000526713900001
JCR quartile: Q3
Funding organizations: National Natural Science Foundation of China (NSFC); German Research Foundation (DFG), project Transregio Crossmodal Learning; CAS-DAAD
Citation statistics: cited 9 times [WOS]
Document type: journal article
Identifier: http://ir.psych.ac.cn/handle/311026/31553
Collection: CAS Key Laboratory of Behavioral Science
Corresponding author: Liu, Xun
Affiliations:
1. Chinese Acad Sci, Key Lab Behav Sci, Inst Psychol, Beijing, Peoples R China
2. Univ Chinese Acad Sci, Dept Psychol, Beijing, Peoples R China
3. Univ Hamburg, Dept Informat, Hamburg, Germany
4. Guangzhou Univ, Sch Educ, Dept Psychol, Ctr Brain & Cognit Sci, Guangzhou, Peoples R China
First author affiliation: CAS Key Laboratory of Behavioral Science
Corresponding author affiliation: CAS Key Laboratory of Behavioral Science
Recommended citation:
GB/T 7714
Fu, Di, Weber, Cornelius, Yang, Guochun, et al. What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective[J]. FRONTIERS IN INTEGRATIVE NEUROSCIENCE, 2020, 14: 18.
APA
Fu, Di., Weber, Cornelius., Yang, Guochun., Kerzel, Matthias., Nan, Weizhi., ... & Wermter, Stefan. (2020). What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective. FRONTIERS IN INTEGRATIVE NEUROSCIENCE, 14, 18.
MLA
Fu, Di, et al. "What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective". FRONTIERS IN INTEGRATIVE NEUROSCIENCE 14 (2020): 18.
Files in this item:
File name/size: What Can Computation (933KB); Document type: journal article; Version: published version; Access: restricted; License: CC BY-NC-SA