Modeling implicit learning in a cross-modal audio-visual serial reaction time task
Authors: Taesler, Philipp (1); Jablonowski, Julia (1); Fu, Qiufang (2); Rose, Michael (1)
Corresponding author: Taesler, Philipp (p.taesler@uke.de)
Abstract: This study examined implicit learning in a cross-modal condition in which visual and auditory stimuli were presented in an alternating fashion. Each cross-modal transition occurred with a probability of 0.85, so participants could gain a reaction time benefit by learning the cross-modal predictive information between colors and tones. Motor responses were randomly remapped to ensure that purely perceptual learning took place. The implicit learning effect was extracted by fitting five different models to the data, which were highly variable due to motor variability. To examine individual learning rates for stimulus types of different discriminability and modality, the models were fitted per stimulus type and individually for each participant. Model selection identified the model that included motor variability, surprise effects for deviants, and a serial position for effect onset as the most explanatory (Akaike weight 0.87). Further, there was a significant global cross-modal implicit learning effect for predictable versus deviant transitions (40 ms reaction time difference, p < 0.004). The learning rates over time differed both between modalities and between stimuli within modalities, although there was no correlation with global error rates or reaction time differences between the stimulus types. These results demonstrate a modeling method that is well suited to extracting detailed information about the success of implicit learning from highly variable data. They further show a cross-modal implicit learning effect, which extends the understanding of the implicit learning system and highlights the possibility for information to be processed in a cross-modal representation without conscious processing. (C) 2018 Elsevier B.V. All rights reserved.
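The abstract reports that five candidate models were compared and that the winning model reached an Akaike weight of 0.87. As a minimal sketch of how Akaike weights are computed from per-model AIC scores (the AIC values below are hypothetical, not taken from the paper, and this does not reproduce the authors' actual models):

```python
import numpy as np

def akaike_weights(aic_scores):
    """Akaike weights: w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2),
    where delta_i = AIC_i - min(AIC)."""
    aic = np.asarray(aic_scores, dtype=float)
    delta = aic - aic.min()               # AIC differences relative to the best model
    rel_likelihood = np.exp(-0.5 * delta) # relative likelihood of each model
    return rel_likelihood / rel_likelihood.sum()

# Hypothetical AIC scores for five candidate models (illustrative values only):
weights = akaike_weights([1012.3, 1008.1, 1004.6, 1010.9, 1015.2])
print(weights.round(3))  # the lowest-AIC model receives the largest weight
```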
Keywords: Implicit learning; Cross-modal; Modeling; Serial reaction time task; Audio-visual
Date issued: 2019-05-01
Language: English
DOI: 10.1016/j.cogsys.2018.10.002
Journal: COGNITIVE SYSTEMS RESEARCH
ISSN: 1389-0417
Volume: 54, Pages: 154-164
Indexed by: SCI
Funding project: German Research Foundation, DFG Crossmodal Learning: Adaptivity, Prediction and Interaction [TRR 169]
Publisher: ELSEVIER SCIENCE BV
WOS keywords: MECHANISMS; SEQUENCES; SELECTION; EXPLICIT
WOS research areas: Computer Science; Neurosciences & Neurology; Psychology
WOS categories: Computer Science, Artificial Intelligence; Neurosciences; Psychology, Experimental
WOS accession number: WOS:000455740800012
Funding organization: German Research Foundation, DFG Crossmodal Learning: Adaptivity, Prediction and Interaction
Citation statistics: cited 9 times (WOS)
Document type: Journal article
Identifier: http://ir.psych.ac.cn/handle/311026/27768
Collection: State Key Laboratory of Brain and Cognitive Science
Affiliations:
1. Univ Med Ctr Hamburg Eppendorf, Inst Syst Neurosci, Martinistr 52, Bldg W34, 320b, Hamburg, Germany
2. Chinese Acad Sci, Inst Psychol, State Key Lab Brain & Cognit Sci, Beijing, Peoples R China
Recommended citation:
GB/T 7714: Taesler, Philipp, Jablonowski, Julia, Fu, Qiufang, et al. Modeling implicit learning in a cross-modal audio-visual serial reaction time task[J]. COGNITIVE SYSTEMS RESEARCH, 2019, 54: 154-164.
APA: Taesler, Philipp, Jablonowski, Julia, Fu, Qiufang, & Rose, Michael. (2019). Modeling implicit learning in a cross-modal audio-visual serial reaction time task. COGNITIVE SYSTEMS RESEARCH, 54, 154-164.
MLA: Taesler, Philipp, et al. "Modeling implicit learning in a cross-modal audio-visual serial reaction time task". COGNITIVE SYSTEMS RESEARCH 54 (2019): 154-164.
Files in this item:
File name/size: Modeling implicit le (1392KB); Document type: Journal article; Version: Published version; Access: Restricted; License: CC BY-NC-SA (request full text)