Institutional Repository, Institute of Psychology, Chinese Academy of Sciences
Modeling implicit learning in a cross-modal audio-visual serial reaction time task
Taesler, Philipp1; Jablonowski, Julia1; Fu, Qiufang2; Rose, Michael1
Corresponding author | Taesler, Philipp (p.taesler@uke.de)
Abstract | This study examined implicit learning in a cross-modal condition in which visual and auditory stimuli were presented in an alternating fashion. Each cross-modal transition occurred with a probability of 0.85, enabling participants to gain a reaction time benefit by learning the cross-modal predictive information between colors and tones. Motor responses were randomly remapped to ensure that purely perceptual learning took place. The implicit learning effect was extracted by fitting five different models to the reaction time data, which were highly variable due to motor variability. To examine individual learning rates for stimulus types of different discriminability and modality, the models were fitted per stimulus type and individually for each participant. Model selection identified the model that included motor variability, surprise effects for deviants, and a serial position for effect onset as the most explanatory (Akaike weight 0.87). Further, there was a significant global cross-modal implicit learning effect for predictable versus deviant transitions (40 ms reaction time difference, p < 0.004). The learning rates over time differed both between modalities and between stimuli within a modality, although there was no correlation with global error rates or reaction time differences between the stimulus types. These results demonstrate a modeling method that is well suited to extracting detailed information about the success of implicit learning from high-variability data. They further show a cross-modal implicit learning effect, which extends the understanding of the implicit learning system and highlights the possibility that information can be processed in a cross-modal representation without conscious processing. (C) 2018 Elsevier B.V. All rights reserved.
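Note on method | The model selection reported in the abstract is based on Akaike weights, which convert the AIC scores of competing models into normalized relative evidence. The following minimal Python sketch (not the authors' code; the AIC values in the example are hypothetical) illustrates the standard computation:

import numpy as np

def akaike_weights(aic_values):
    """Convert per-model AIC values into Akaike weights.

    delta_i = AIC_i - min(AIC); weight_i is exp(-delta_i / 2)
    normalized over all models, so the weights sum to 1 and can be
    read as relative evidence for each candidate model.
    """
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()                # difference to the best-scoring model
    rel_likelihood = np.exp(-0.5 * delta)  # relative likelihood of each model
    return rel_likelihood / rel_likelihood.sum()

# Hypothetical AIC scores for five candidate reaction-time models:
print(akaike_weights([2130.0, 2125.4, 2141.2, 2128.9, 2150.7]))

A weight close to 1, such as the 0.87 reported for the winning model, indicates that this model carries most of the relative evidence among the candidates considered.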
Keywords | Implicit learning; Cross-modal; Modeling; Serial reaction time task; Audio-visual
Publication date | 2019-05-01
Language | English
DOI | 10.1016/j.cogsys.2018.10.002 |
Journal | COGNITIVE SYSTEMS RESEARCH
ISSN | 1389-0417 |
Volume | 54
Pages | 154-164
Indexed by | SCI
Funding project | German Research Foundation (DFG), Crossmodal Learning: Adaptivity, Prediction and Interaction [TRR 169]
Publisher | ELSEVIER SCIENCE BV
WOS keywords | MECHANISMS; SEQUENCES; SELECTION; EXPLICIT
WOS research areas | Computer Science; Neurosciences & Neurology; Psychology
WOS categories | Computer Science, Artificial Intelligence; Neurosciences; Psychology, Experimental
WOS accession number | WOS:000455740800012
Funding organization | German Research Foundation (DFG), Crossmodal Learning: Adaptivity, Prediction and Interaction
Document type | Journal article
Item identifier | http://ir.psych.ac.cn/handle/311026/27768
Collection | State Key Laboratory of Brain and Cognitive Science
Author affiliations | 1. Univ Med Ctr Hamburg Eppendorf, Inst Syst Neurosci, Martinistr 52, Bldg W34, 320b, Hamburg, Germany; 2. Chinese Acad Sci, Inst Psychol, State Key Lab Brain & Cognit Sci, Beijing, Peoples R China
Recommended citation (GB/T 7714) | Taesler, Philipp, Jablonowski, Julia, Fu, Qiufang, et al. Modeling implicit learning in a cross-modal audio-visual serial reaction time task[J]. COGNITIVE SYSTEMS RESEARCH, 2019, 54: 154-164.
APA | Taesler, Philipp, Jablonowski, Julia, Fu, Qiufang, & Rose, Michael. (2019). Modeling implicit learning in a cross-modal audio-visual serial reaction time task. COGNITIVE SYSTEMS RESEARCH, 54, 154-164.
MLA | Taesler, Philipp, et al. "Modeling implicit learning in a cross-modal audio-visual serial reaction time task". COGNITIVE SYSTEMS RESEARCH 54 (2019): 154-164.
Files in this item
File name/size | Document type | Version | Access | License
Modeling implicit le (1392KB) | Journal article | Published version | Restricted access | CC BY-NC-SA
Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.