PSYCH OpenIR
Title: 视障儿童触视觉融合认知图形设计研究
Alternative title: Cognitive Graphic Design of Tactile Visual Fusion in Visually Impaired Children
Authors: 徐佳 1; 林子翔 2; 高卉 3; 禤宇明 4
First author: 徐佳
Affiliation order of the Institute of Psychology: 4
Abstract

Purpose: Visually impaired children have difficulty understanding graphics and images because their cognitive channels are restricted and some sensory modalities are missing. This work designs assistive cognitive graphic tools that deepen their understanding of depicted objects, raise their cognitive level, and make learning more enjoyable. Methods: The information content of tactile graphics was increased by fusing multiple kinds of sensory information. Based on the V-T-M image cognition model of blind people, a structured questionnaire survey and a structured design process were used to produce tactile-visual fusion graphic designs for the main subjects in primary-school textbooks, and the design outcomes were evaluated with the Consensual Assessment Technique (CAT). Results: A series of visual-tactile fusion graphics suitable for primary-school visually impaired children was designed. The graphics significantly improved the children's cognitive clarity regarding the objects described in their texts, and also increased their comprehension, imagination, interest in learning, and aesthetic experience. Conclusion: Graphic design based on tactile-visual fusion satisfies the graphic cognition needs of visually impaired children to a certain extent and clearly improves their cognition of classroom learning content.
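To make the CAT evaluation step more concrete, the sketch below illustrates one common way such expert ratings are aggregated: a mean score per design plus an inter-judge consistency index (Cronbach's alpha with judges treated as items). This is only a minimal illustration of the general technique, not the analysis reported in the paper; the judges, designs, and scores are hypothetical.

import numpy as np

# Hypothetical CAT-style ratings: rows are expert judges, columns are graphic
# designs, and each cell is a 1-5 score on a single dimension (e.g. cognitive
# clarity). All judges, designs, and values here are invented for illustration.
ratings = np.array([
    [4, 5, 3, 4, 5],   # judge 1
    [4, 4, 3, 5, 5],   # judge 2
    [5, 5, 4, 4, 4],   # judge 3
])

def cronbach_alpha(scores):
    """Cronbach's alpha with judges as 'items' (rows) and designs as cases (columns)."""
    k = scores.shape[0]                          # number of judges
    judge_vars = scores.var(axis=1, ddof=1)      # each judge's rating variance
    total_var = scores.sum(axis=0).var(ddof=1)   # variance of the summed ratings
    return k / (k - 1) * (1 - judge_vars.sum() / total_var)

print("mean rating per design:", ratings.mean(axis=0))
print("inter-judge consistency (alpha): %.3f" % cronbach_alpha(ratings))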


Keywords: sensory fusion; tactile graphics; visually impaired children; CAT consensual assessment
Publication year: 2023
Language: Chinese
DOI: 10.11834/jig.220085
Journal: 包装工程 (Packaging Engineering)
ISSN: 1001-3563
Volume: 44; Issue: 04; Pages: 255-261, 335
Funding project: 2020 research project of the Guangdong Provincial Education Science "13th Five-Year Plan" (2020GXJK339)

Document type: Journal article
Item identifier: http://ir.psych.ac.cn/handle/311026/44799
Collection: 中国科学院心理研究所 (Institute of Psychology, Chinese Academy of Sciences)
Author affiliations:
1. 广东海洋大学 (Guangdong Ocean University)
2. 大连民族大学 (Dalian Minzu University)
3. 宁波大学科学技术学院 (College of Science and Technology, Ningbo University)
4. 中国科学院心理研究所 (Institute of Psychology, Chinese Academy of Sciences)
Recommended citation:
GB/T 7714: 徐佳, 林子翔, 高卉, 等. 视障儿童触视觉融合认知图形设计研究[J]. 包装工程, 2023, 44(04): 255-261, 335.
APA: 徐佳, 林子翔, 高卉, & 禤宇明. (2023). 视障儿童触视觉融合认知图形设计研究. 包装工程, 44(04), 255-261, 335.
MLA: 徐佳, et al. "视障儿童触视觉融合认知图形设计研究." 包装工程 44.04 (2023): 255-261, 335.
Files in this item:
File name / size | Document type | Version | Access | License
视障儿童触视觉融合认知图形设计研究.pdf (1614 KB) | Journal article | Published version | Restricted access | CC BY-NC-SA