PSYCH OpenIR > State Key Laboratory of Brain and Cognitive Science
Modeling implicit learning in a cross-modal audio-visual serial reaction time task
Taesler, Philipp1; Jablonowski, Julia1; Fu, Qiufang2; Rose, Michael1
First Author: Taesler, Philipp
2018
Journal: Cognitive Systems Research
Corresponding Author Email: p.taesler@uke.de
Article Type: article
Affiliation Order: 2
Abstract: This study examined implicit learning in a cross-modal condition, where visual and auditory stimuli were presented in an alternating fashion. Each cross-modal transition occurred with a probability of 0.85, enabling participants to gain a reaction time benefit by learning the cross-modal predictive information between colors and tones. Motor responses were randomly remapped to ensure that pure perceptual learning took place. The effect for the implicit learning was extracted from the data by fitting five different models to the data, which was highly variable due to motor variability. To examine individual learning rates for stimulus types of different discriminability and modality, the models were fitted per stimulus type and individually for each participant. The model selection identified the model that included motor variability, surprise effects for deviants and a serial position for effect onset as the most explanatory (Akaike weight 0.87). Further, there was a significant global cross-modal implicit learning effect for predictable versus deviant transitions (40 ms reaction time difference, p
© 2018 Elsevier B.V.
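The abstract reports model selection via Akaike weights (the winning model received a weight of 0.87). As a minimal sketch of how such weights are derived from per-model AIC scores, the snippet below implements the standard formula w_i = exp(-Δ_i/2) / Σ_j exp(-Δ_j/2), where Δ_i = AIC_i − min(AIC); the five AIC values are invented for illustration and are not from the study.

```python
import math

def akaike_weights(aics):
    """Compute Akaike weights from a list of AIC scores.

    w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC). Weights sum to 1 and can be
    read as relative evidence for each candidate model.
    """
    min_aic = min(aics)
    rel = [math.exp(-(a - min_aic) / 2.0) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AIC scores for five candidate models (illustrative only).
weights = akaike_weights([212.4, 208.1, 215.9, 210.3, 213.7])
best = weights.index(max(weights))  # the lowest-AIC model gets the largest weight
```

Subtracting the minimum AIC before exponentiating keeps the computation numerically stable, since raw AIC values can be large enough to underflow `exp(-AIC/2)`.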
Keywords: Audio-visual - Cross-modal - Cross-modal representations - Implicit learning - Individual learning - Perceptual learning - Predictive information - Reaction-time task
Subject Area: Models - System Theory
Subject Category: Reaction rates
DOI: 10.1016/j.cogsys.2018.10.002
Indexed By: EI
Language: English
Publisher: Elsevier B.V.
Document Type: Journal Article
Identifier: http://ir.psych.ac.cn/handle/311026/27768
Collection: State Key Laboratory of Brain and Cognitive Science
Corresponding Author: Taesler, Philipp
Affiliations:
1. University Medical Center Hamburg Eppendorf, Institute for Systems Neuroscience, Hamburg, Germany;
2. State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
Recommended Citation:
GB/T 7714: Taesler, Philipp, Jablonowski, Julia, Fu, Qiufang, et al. Modeling implicit learning in a cross-modal audio-visual serial reaction time task[J]. Cognitive Systems Research, 2018.
APA: Taesler, Philipp, Jablonowski, Julia, Fu, Qiufang, & Rose, Michael. (2018). Modeling implicit learning in a cross-modal audio-visual serial reaction time task. Cognitive Systems Research.
MLA: Taesler, Philipp, et al. "Modeling implicit learning in a cross-modal audio-visual serial reaction time task". Cognitive Systems Research (2018).
Files in This Item:
File Name/Size: Modeling implicit le (1392KB) | Document Type: Journal Article | Version: Published | Access: Restricted | License: CC BY-NC-SA
File Name: Modeling implicit learning in a cross-modal audio-visual serial reaction time task.pdf
Format: Adobe PDF

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.