基于面部视频行为分析的决策风格自动感知技术研究
Alternative Title: Research on Decision-Making Style Automatic Perception Technology Based on Facial Video Behavior Analysis
过宇
Supervisor: 刘晓倩
2022-06
Abstract

Decision-making style is the pattern of behavior an individual exhibits in decision situations; it affects many aspects of a person's life and plays a major role in fields such as medicine and business. A method for identifying an individual's decision-making style accurately and conveniently therefore has clear practical value. This thesis explores the relationship between individual facial activity and decision-making style, applies data-mining techniques to psychological research, builds decision-making style recognition models, and evaluates their reliability and validity through the following two studies:

Study 1: Correlation analysis between individual facial activity and decision-making style. A Microsoft Kinect camera recorded the movement of 36 facial key points and the changes in 17 facial action units (hereafter facial AUs) while participants gave a self-introduction. Statistical analysis against scores on the General Decision-Making Style questionnaire (GDMS) found that the dispersion of key-point activity in the lower-lip, left-cheek, and chin regions correlated significantly and positively with the avoidant dimension; the dispersion of facial AUs in the mouth-corner and cheek regions correlated significantly and positively with the rational dimension; the dispersion of facial AUs in the cheek region correlated significantly and positively with the intuitive dimension; and the dispersion of facial AUs in the lip, mouth-corner, cheek, and eyebrow regions correlated significantly and positively with the avoidant and spontaneous (impulsive) dimensions. These results confirm, from a data-driven perspective, a close correlation between facial activity and decision-making style, and further indicate that automatic decision-making style recognition based on facial activity analysis is feasible.
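As a rough illustration of the Study 1 analysis, the Python sketch below derives a per-key-point dispersion statistic from key point trajectories and correlates it with GDMS dimension scores. It is a minimal sketch under stated assumptions: the variable names, the choice of standard deviation of frame-to-frame displacement as the "dispersion" measure, and the use of Pearson correlation are not specified in the abstract and are assumed here for illustration.

    # Minimal sketch of the Study 1 correlation analysis (details assumed).
    import numpy as np
    from scipy import stats

    def keypoint_dispersion(trajectories):
        """trajectories: array of shape (n_frames, n_keypoints, 2) holding
        x/y positions per frame. Returns one dispersion value per key point:
        the standard deviation of frame-to-frame displacement magnitudes
        (one plausible reading of 'dispersion'; the thesis does not specify)."""
        displacement = np.diff(trajectories, axis=0)       # (n_frames-1, n_keypoints, 2)
        magnitude = np.linalg.norm(displacement, axis=-1)  # (n_frames-1, n_keypoints)
        return magnitude.std(axis=0)                       # (n_keypoints,)

    def correlate_with_gdms(dispersions, gdms_scores):
        """dispersions: (n_subjects, n_keypoints); gdms_scores: (n_subjects,)
        scores on one GDMS dimension (e.g. avoidant). Returns a list of
        (Pearson r, p-value) pairs, one per key point."""
        return [stats.pearsonr(dispersions[:, k], gdms_scores)
                for k in range(dispersions.shape[1])]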

Study 2 concerns methods for building automatic decision-making style recognition models from facial motion. Four sub-studies examine the effectiveness of facial-video-based recognition using different combinations of facial key point and facial AU features (hedged sketches of the shared modeling and evaluation steps follow the list):

1) A recognition model built from the full set of facial key point data. Using linear regression, the model's split-half reliability is around 0.7 and its criterion validity is between 0.4 and 0.6.

2) A recognition model built from the 36 facial key points with the most frequent activity. Using linear regression, criterion validity on four of the five dimensions is between 0.36 and 0.57; performance on the avoidant dimension is poor.

3) A recognition model built from analysis of facial AU changes. Using linear regression, the model's split-half reliability is between 0.63 and 0.76 and its criterion validity is between 0.4 and 0.6.

4) A recognition model built from fused facial key point and facial AU change features. Using support vector regression, the model's split-half reliability is roughly 0.5 to 0.7 and its criterion validity is between 0.31 and 0.58; on the avoidant dimension it significantly outperforms the models trained with linear regression.
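The four sub-studies share one modeling-and-evaluation pattern: fit a regressor from facial features to one GDMS dimension score and report criterion validity as the correlation between predicted and questionnaire scores. The sketch below illustrates that pattern with scikit-learn; the cross-validation scheme and feature layout are assumptions, with LinearRegression and SVR standing in for the linear regression of sub-studies 1-3 and the support vector regression of sub-study 4.

    # Hedged sketch of the shared model-building step (CV scheme assumed).
    from scipy import stats
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_predict

    def criterion_validity(model, features, gdms_scores, cv=5):
        """Pearson r between cross-validated predictions and questionnaire
        scores -- one common way to operationalize criterion validity.
        features: (n_subjects, n_features) key point / AU statistics;
        gdms_scores: (n_subjects,) scores on one GDMS dimension."""
        predicted = cross_val_predict(model, features, gdms_scores, cv=cv)
        r, _ = stats.pearsonr(predicted, gdms_scores)
        return r

    # Comparison mirroring sub-studies 1-4 (X and y are hypothetical data):
    # for model in (LinearRegression(), SVR(kernel="rbf")):
    #     print(type(model).__name__, criterion_validity(model, X, y))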
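Split-half reliability, reported for sub-studies 1, 3, and 4, could be computed by splitting each recording into two halves, scoring each half with the model, and correlating the two prediction series with a Spearman-Brown correction. The abstract does not describe the splitting scheme, so the sketch below is an assumed protocol rather than the thesis's procedure.

    # Assumed split-half protocol; the thesis does not detail its scheme.
    from scipy import stats

    def split_half_reliability(pred_half_a, pred_half_b):
        """pred_half_a / pred_half_b: per-subject model predictions computed
        from the two halves (e.g. odd vs. even frames) of each video."""
        r, _ = stats.pearsonr(pred_half_a, pred_half_b)
        return 2 * r / (1 + r)  # Spearman-Brown correction to full length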

Through these two studies, this thesis offers an initial exploration of the correlation between individual facial activity and decision-making style and proposes a method for recognizing decision-making style from facial video data. Compared with traditional scale-based testing, the method has two advantages: (1) facial behavior data are collected through ecological video recording, avoiding the measurement bias that subjective and objective influences on participants introduce in traditional testing settings; (2) administration is efficient and convenient, suits large-scale, multi-scenario assessment, and can serve as an effective complement to traditional scale-based testing.


Keywords: Kinect; decision-making style; facial video analysis; psychological perception; machine learning
Degree Type: Master
Language: Chinese
Degree Name: Master of Science
Degree Discipline: Applied Psychology
Degree Grantor: University of Chinese Academy of Sciences
Place of Degree Conferral: Institute of Psychology, Chinese Academy of Sciences
Document Type: Thesis
Identifier: https://ir.psych.ac.cn/handle/311026/45016
Collection: Division of Social and Engineering Psychology (社会与工程心理学研究室)
Recommended Citation (GB/T 7714):
过宇. 基于面部视频行为分析的决策风格自动感知技术研究[D]. 中国科学院心理研究所. 中国科学院大学,2022.
Files in This Item:
File Name/Size: 过宇-硕士学位论文.pdf (1777 KB) · Document Type: Thesis · Access: Open Access · License: CC BY-NC-SA