|Alternative Title||The mechanism of concept processing in visual-language integration|
In daily life, people receive information from different modalities at all times. Numerous cognitive science studies have shown that different information-processing modules do not work independently but interact in complex ways: language guides our visual attention, and visual information in turn feeds back into and affects language processing. Researchers have approached the mechanism of cross-modal integration of linguistic and visual information from different perspectives, but no consensus has been reached. In this study, we systematically examined the cross-modal integration of linguistic and visual information from the perspective of conceptual features in two eye-tracking studies. We focus on three issues: (1) whether the integration of linguistic and visual information depends on the retrieval of conceptual features, and whether shifts of visual attention during integration are driven by conceptual features; (2) whether the degree of conceptual overlap affects eye-movement patterns; and (3) the source of the conceptual features involved in integration: perceived information or stored knowledge.
In the first study, we explored whether the integration of visual and linguistic information depends on the extraction of conceptual features and examined how the degree of conceptual overlap affects visual attention. We adopted Rosch et al.'s (1976) definition of category concepts, in which concepts are divided by level of abstraction into subordinate concepts (e.g., sparrow), basic-level concepts (e.g., bird), and superordinate concepts (e.g., animal). In Experiment 1, we manipulated the degree of conceptual overlap by varying the spoken words across these hierarchical levels and found that participants looked at the target objects more often when there was greater featural overlap between visual objects and spoken words, suggesting that the degree of conceptual overlap between spoken words and visual objects is the major factor determining the probability that people fixate the target object. These results support the Conceptual Overlap Hypothesis. Experiment 2 replicated Experiment 1 with children as participants and found a pattern of visual attention similar to that of adults. In summary, the first study showed that visual and linguistic information is integrated at the conceptual level and that the degree of conceptual overlap affects visual attention.
In the second study, we directly manipulated the conceptual features shared between visual and linguistic information and further examined the source of conceptual features during integration. In Experiment 3, we manipulated the preview duration and examined the shape effect in the integration of visual and linguistic information. We found that, regardless of preview duration, participants' eyes fixated objects that had the same shape as the referents of the words they heard. Moreover, because the shape of the target picture differed from the canonical shape of the spoken word's referent (e.g., a closed "umbrella" paired with the spoken word "mushroom"), the results indicate that shape features can be retrieved from semantic knowledge in long-term memory during the integration of visual and linguistic information. In Experiments 4a/4b, in addition to replicating the shape effect of Experiment 3, we added a color condition, pairing visual objects and spoken words with similar colors, and a color+shape condition, in which both color and shape were similar. We were interested in whether a color effect exists and whether multiple overlapping conceptual features attract more visual attention than a single overlapping feature. Not only was the shape effect replicated, but a color effect was also found: eye movements were directed to objects whose color was similar to that of the spoken word's referent (e.g., the picture "tomato" and the spoken word "fire extinguisher"). Most importantly, the fixation probability in the color+shape condition was greater than in either the color or the shape condition alone, indicating that multiple overlapping conceptual features between visual and linguistic representations attract more eye movements.
In summary, we found that: (1) linguistic and visual information is integrated at the conceptual level, and this integration depends on the activation of conceptual features; (2) the degree of conceptual overlap affects visual attention during the integration of visual and linguistic information: the more conceptual features are shared, the greater the fixation probability, providing direct evidence for the Conceptual Overlap Hypothesis; and (3) the conceptual features involved in integration can be extracted not only from stored semantic knowledge but also from the perceived input, with different conceptual features activated to different degrees. By examining these issues, we gain a deeper understanding of the integration of linguistic and visual information and provide theoretical support for further studies across different fields.
|Keyword||language processing; visual attention; cross-modal integration; conceptual features; visual world paradigm|
|Place of Conferral||Institute of Psychology, Chinese Academy of Sciences|
|Han Haibin. The role and mechanism of conceptual processing in audio-visual cross-modal integration [D]. Institute of Psychology, Chinese Academy of Sciences, 2020.|
|Files in This Item:|
|韩海宾-博士学位论文.pdf (2437KB)||Doctoral dissertation||Restricted access||CC BY-NC-SA||Application Full Text|