In daily life, we use language through multiple modalities, including listening, speaking, reading, and writing. In each modality, lexical knowledge, that is, knowledge of various word properties, can affect lexical processing. For example, frequently used words are processed more quickly than infrequently used words in all modalities. How lexical knowledge is represented and whether it is shared across modalities are important theoretical questions in psycholinguistics. It is also important to note that adults continue to acquire lexical knowledge while using language, and whether newly acquired lexical knowledge can be transferred across modalities remains an open question. To explore these issues, we assessed the degree to which lexical knowledge is shared across modalities by examining whether word frequency knowledge can be transferred from the auditory modality to the visual modality.
In Study 1, each of two experiments consisted of a training session and a test session. In the training session, the frequency of target words was increased through training; after a delay, an overlapping ambiguous string segmentation test was conducted to observe word frequency effects for the trained words in the visual modality. An overlapping ambiguous string is a three-character string in which both the left two characters and the right two characters form a two-character word (e.g., "旁白板", in which the left-side word "旁白" means narrator and the right-side word "白板" means whiteboard), so it has two possible segmentations. Previous research has shown that the segmentation of overlapping ambiguous strings is sensitive to word frequency: higher-frequency words are more likely to be segmented out as words. Target words were presented in the auditory modality in the training session of Experiment 1 and in the visual modality in the training session of Experiment 2; in the test session of both experiments, target words were presented in the visual modality. Experiment 1 examined whether word frequency knowledge could be transferred from the auditory modality to the visual modality, and comparing the two experiments allowed us to contrast the word frequency effect of cross-modal transfer with that of within-modality training. The results of both experiments showed that trained words were more likely than untrained words to be segmented out as words in both the 5-minute and the 7-day interval tests. These results suggest that word frequency knowledge can be transferred across modalities and that the newly acquired word frequency effect is stable over time. Comparing the two experiments showed that the magnitude of the word frequency effect for cross-modal transfer was similar to that of within-modality training.
In Study 2, we used the eye-tracking technique to examine how the cross-modal transfer effect of word frequency manifests during Chinese reading. Three conditions, auditory training, visual training, and no training, were included in the training session of Experiment 3. In the test session, participants performed two tasks in succession: overlapping ambiguous string segmentation and sentence reading. The training effect in the segmentation test was consistent with Study 1: the probability of being segmented out as a word was significantly higher for both auditorily and visually trained words than for untrained words. Eye-movement results also showed significant cross-modal and within-modality training effects. Target words trained in either the visual or the auditory modality elicited shorter first fixation durations, gaze durations, and total reading times in sentence reading. These results suggest that word frequency knowledge acquired through the auditory modality can be transferred to the visual modality and produces word frequency effects in reading tasks.
In summary, the two studies showed that word frequency knowledge acquired through training in the auditory modality can be transferred to the visual modality, with an effect of similar magnitude to that of word frequency knowledge acquired through training within the visual modality. These results suggest that word frequency knowledge is shared to some extent between the visual and auditory modalities, and support the view that lexical knowledge such as word frequency is more likely to be stored at the semantic level in cognitive models of language. The study has important theoretical implications for a more comprehensive understanding of the cognitive mechanisms of lexical processing and provides support for the development of models of lexical recognition. In addition, the study has practical value, indicating that statistical studies of word frequency should consider the word frequencies of written language together with those of spoken language.