人工智能信任及其影响因素:基于能力和友善维度
Alternative Title: Trust in artificial intelligence and its influencing factors: Based on the dimensions of competence and warmth
Author: 李玉刚
Supervisor: 栾胜华
Date: 2024-12
Abstract

Artificial intelligence (AI) has been widely applied in many areas of production and daily life. Research shows that the degree to which people trust AI is critical to their acceptance and use of AI technologies. Trust theories grounded in the social-cognitive perspective hold that people perceive and form trust in AI along the dimensions of competence and warmth, yet no empirical study has verified whether these two dimensions play a core role in AI trust, and the similarities and differences between trust in AI (AI trust) and trust in people (interpersonal trust) have not been compared in depth. In addition, most AI trust research relies on self-report data and lacks behavioral evidence. To address these gaps, this dissertation reports four studies (N = 4073) that examine the roles of competence, warmth, and other variables in AI trust, compare AI trust with interpersonal trust on behavioral and self-report measures, and explore methods for regulating AI trust. Together, these studies deepen our understanding of the mechanisms of AI trust and provide empirical support and practical implications for fostering effective human-AI collaboration.

Study 1 used a large representative survey (N = 2187) to systematically examine the relationships of competence, warmth, and 17 other trust-related variables with AI trust. The results showed that people's perceptions of AI's competence and warmth explained their level of trust in AI well (R² = 0.47); competence and warmth partially mediated the relationships between the other variables and AI trust, further demonstrating the core role of these two factors. Within a three-dimensional framework of trustor, trustee, and interaction context, Study 1 also found that, beyond competence and warmth, variables specific to AI as a trustee (e.g., robustness and degree of anthropomorphism), certain trustor characteristics (e.g., hedonic motivation), and interaction-context variables (e.g., social influence) significantly influenced AI trust. Covering the broadest set of variables in AI trust research to date, this study confirms the validity of understanding AI trust from a social-cognitive perspective and establishes the central role of competence and warmth.
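
As context for the regression and mediation results above, the following is a minimal sketch, in Python with simulated data, of the regression-based logic of a partial-mediation test; the variable names, effect sizes, and procedure are illustrative assumptions, not the dissertation's actual analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2187  # sample size matching Study 1

# Simulated data: an antecedent (here, social influence) shapes perceived
# competence, which in turn shapes AI trust, alongside a direct path.
social_influence = rng.normal(size=n)
competence = 0.5 * social_influence + rng.normal(size=n)
trust = 0.4 * competence + 0.2 * social_influence + rng.normal(size=n)

# Total effect: trust regressed on the antecedent alone.
total = sm.OLS(trust, sm.add_constant(social_influence)).fit()

# Direct effect: trust regressed on the antecedent plus the mediator.
X = sm.add_constant(np.column_stack([social_influence, competence]))
direct = sm.OLS(trust, X).fit()

print(f"total effect:  {total.params[1]:.3f}")   # ~0.40 with this setup
print(f"direct effect: {direct.params[1]:.3f}")  # ~0.20: shrinks but stays > 0
# Partial mediation: the antecedent's coefficient shrinks toward zero, yet
# remains nonzero, once the mediator (perceived competence) is controlled.
```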

Building on the competence and warmth dimensions, Study 2 used a behavioral experiment (N = 644) to compare AI trust and interpersonal trust on both self-report and behavioral measures. Behavioral trust was measured by the weight of advice (WOA), i.e., the degree to which participants adopted their partner's advice. Participants estimated the age of a person in a photograph, first judging independently and then completing the task with a partner (an AI or a human); the participant made the final judgment, and the partner allocated the reward. The partner's competence and warmth were manipulated by controlling the accuracy of its advice and the reward-allocation ratio. Results showed that competence significantly affected both AI trust and interpersonal trust, whereas warmth, although an important determinant of interpersonal trust, had a relatively weak effect on AI trust. Overall, although participants rated the human partner more favorably, their behavior revealed a preference for AI: they were more willing to adopt advice from the AI than from the human.
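
Because the weight of advice is the key behavioral measure here, a minimal sketch of its conventional computation in the judge-advisor literature is given below; the clipping to [0, 1] is a common convention assumed for illustration, as the abstract does not specify the exact scoring rules.

```python
def weight_of_advice(initial: float, advice: float, final: float):
    """WOA: how far the final estimate moved toward the advice.

    WOA = (final - initial) / (advice - initial)
    0 = advice ignored; 1 = advice fully adopted.
    """
    if advice == initial:
        return None  # shift undefined when advice equals the initial estimate
    woa = (final - initial) / (advice - initial)
    return max(0.0, min(1.0, woa))  # clip to [0, 1] (assumed convention)

# Example: initial estimate 30 years, advice 40, final answer 36 -> WOA = 0.6
print(weight_of_advice(30, 40, 36))
```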

Algorithm aversion is an important finding in recent research on attitudes toward AI. To test whether people trust AI and humans differently in real-world decision making, and whether they show a preference for or an aversion to AI algorithms, Study 3 conducted four field experiments (N = 484). Using the advice-adoption paradigm, the study recruited sports fans to make incentivized predictions for National Basketball Association (NBA) games and UEFA European Championship football matches. Participants could consult three types of advice: an AI algorithm, other people's guesses, and betting odds. Results showed that although the AI algorithm was the most accurate, participants were most inclined to adopt the betting odds in the basketball games; in the football matches, participants consulted the AI algorithm and the betting odds most frequently, but advice adoption did not differ across sources. Overall, the experiments found no clear behavioral preference for or aversion to AI algorithms. In real-world decisions such as sports prediction, participants' trust in AI algorithms and in humans did not differ noticeably, which may relate to the algorithm's competence, the task context, and individual experience, and warrants further research.

Study 4 used five behavioral experiments (N = 758) to test multiple methods of regulating AI trust by strengthening perceptions of AI warmth and competence, providing an empirical basis for AI trust interventions. Study 4a found that warm expression feedback increased AI trust; Study 4b showed that displaying confidence levels matched to the AI's competence effectively regulated trust; Study 4c found that displaying the historical performance of both the AI and the participant increased trust in a high-competence AI; Study 4d found that explaining the causes of AI errors reduced AI trust; and Study 4e preliminarily explored the effect of AI literacy training on trust. Study 4 confirmed that AI trust can be regulated by strengthening perceived warmth and competence, offering theoretical and practical grounds for designing AI trust interventions.

Through one large-scale survey and three experimental studies organized around the competence and warmth dimensions, this dissertation examined the key factors influencing AI trust, compared AI trust with interpersonal trust, deepened the theoretical understanding of how AI trust forms, and explored practical means of regulating it. Its main contributions and innovations are: (1) establishing the core role of competence and warmth in AI trust and validating the social-cognitive perspective on AI trust; (2) comparing AI trust and interpersonal trust on behavioral and self-report measures, revealing an attitude-behavior dissociation in AI trust and a preference for AI; (3) examining differences in trust toward AI algorithms and humans in real-world decision contexts and finding no clear behavioral preference for or aversion to AI algorithms; (4) testing different ways to regulate AI trust by strengthening perceptions of competence and warmth, showing that warmth feedback, confidence feedback, historical-performance feedback, and error-explanation feedback can effectively regulate AI trust, with important implications for AI interaction design and trust regulation; and (5) developing an advice-adoption behavioral paradigm for systematically testing the influencing factors of AI trust and the means of regulating it.

Other Abstract

Artificial Intelligence (AI) has been widely applied across various domains of production and everyday life. Research has found that the level of trust individuals place in AI is crucial to their acceptance and use of AI technologies. Trust theories from a social-cognitive perspective suggest that individuals build trust in AI based on its perceived competence and warmth. However, no empirical studies have yet confirmed whether these two dimensions play a core role in AI trust, nor has prior work closely compared trust in AI (AI trust) with interpersonal trust. Furthermore, most research on AI trust relies on self-reported data and lacks empirical analyses at the behavioral level. To address these issues, this dissertation presents four studies (N = 4073) that explore the roles of competence, warmth, and other variables in AI trust, compare AI trust with interpersonal trust on both behavioral and self-reported measures, and investigate methods for regulating trust in AI. These studies deepen the understanding of the mechanisms behind AI trust and provide empirical support and practical insights for promoting effective human-AI collaboration.

Study 1 conducted a large-scale representative survey (N = 2187) to systematically examine the relationships of competence, warmth, and 17 other trust-related variables with AI trust. The results revealed that individuals' perceptions of AI competence and warmth explained their level of AI trust well (R² = 0.47). Competence and warmth partially mediated the relationships between the other variables and AI trust, further confirming the central role of these two factors. Using a three-dimensional framework of trustor, trustee, and interaction context, Study 1 also found that, in addition to competence and warmth, certain AI-specific trustee variables (e.g., robustness and anthropomorphism), some trustor characteristics (e.g., hedonic motivation), and interaction-context factors (e.g., social influence) significantly influenced AI trust. This study encompassed the widest range of variables in AI trust research to date, confirming the validity of understanding AI trust from a social-cognitive perspective and highlighting the central role of competence and warmth.

Study 2 focused on the dimensions of competence and warmth, using a behavioral experiment (N = 644) to compare AI trust with interpersonal trust on both self-reported and behavioral measures. Behavioral trust was assessed with the weight of advice (WOA), the degree to which participants adopted their partner's suggestions. Participants estimated the age of a person in a photograph, made an independent judgment, and then completed the task with either an AI or a human partner; the final judgment was made by the participant, and rewards were distributed by the partner. The partner's competence and warmth were manipulated by controlling the accuracy of its suggestions and the distribution of rewards. Results showed that competence significantly affected both AI trust and interpersonal trust, while warmth, although critical for interpersonal trust, had a relatively weaker impact on AI trust. Overall, despite rating humans more favorably, participants exhibited a behavioral preference for AI, being more inclined to adopt AI suggestions over human ones.

Algorithm aversion is an important finding in recent research on attitudes toward AI. To investigate whether individuals trust AI and humans differently in real-world decision making, and whether they show a preference for or an aversion to AI algorithms, Study 3 conducted four field experiments (N = 484). Using an advice-adoption paradigm, the study recruited sports enthusiasts to make incentivized predictions for NBA games and UEFA European Championship football matches. Participants could consult three types of advice: AI algorithms, other people's guesses, and betting odds. The results showed that although the AI algorithms had the highest prediction accuracy, participants were most inclined to adopt the betting odds in basketball matches; in football matches, participants selected AI algorithms and betting odds most frequently, but advice adoption did not differ significantly across sources. Overall, the experiments revealed no clear preference for or aversion to AI algorithms at the behavioral level. In real-world decisions such as sports prediction, trust in AI algorithms and in humans did not differ significantly, which may relate to algorithm competence, task context, and individual experience, and warrants further investigation.

Study 4 comprised five behavioral experiments (N = 758) testing various methods of modulating AI trust by enhancing individuals' perceptions of AI competence and warmth, providing an empirical foundation for AI trust interventions. Study 4a found that warm facial feedback improved AI trust; Study 4b demonstrated that displaying confidence levels matched to the AI's competence effectively modulated trust; Study 4c revealed that showing the historical performance of both the AI and the participant increased trust in a high-competence AI; Study 4d found that explaining the reasons behind AI errors reduced trust in AI; and Study 4e preliminarily explored the impact of AI literacy training on trust. Study 4 confirmed that enhancing perceptions of AI competence and warmth can effectively modulate trust, offering theoretical and practical foundations for developing AI trust intervention strategies.

This dissertation, through a large-scale survey and three experimental studies, examines the key factors influencing AI trust based on competence and warmth, compares AI trust with interpersonal trust, deepens the theoretical understanding of AI trust formation mechanisms, and explores practical methods for modulating AI trust. The main contributions and innovations of this dissertation are as follows: (1) identifying the central role of competence and warmth in AI trust and validating the social cognitive perspective of AI trust; (2) comparing AI trust with interpersonal trust across behavioral and self-reported measures, revealing a dissociation between attitude and behavior in AI trust and identifying a preference for AI; (3) investigating trust differences between AI algorithms and humans in real-world decision contexts, finding no clear preference or aversion toward AI algorithms at the behavioral level; (4) exploring different methods for modulating AI trust by enhancing perceptions of competence and warmth, demonstrating that warmth feedback, confidence feedback, historical performance feedback, and error explanation feedback can effectively modulate AI trust, providing important insights for AI interaction design and trust regulation; and (5) developing a behavioral research paradigm based on advice adoption for systematically examining the influencing factors and intervention methods of AI trust.

Keywords: AI trust; competence; warmth; algorithm aversion; trust regulation
Degree Type: Doctoral
Language: Chinese
Degree Name: Doctor of Science (理学博士)
Degree Discipline: Applied Psychology
Degree Grantor: University of Chinese Academy of Sciences (中国科学院大学)
Place of Degree Conferral: Institute of Psychology, Chinese Academy of Sciences (中国科学院心理研究所)
Document Type: Doctoral dissertation
Identifier: http://ir.psych.ac.cn/handle/311026/49442
Collection: 社会与工程心理学研究室 (Social and Engineering Psychology Research Division)
Recommended Citation (GB/T 7714):
李玉刚. 人工智能信任及其影响因素:基于能力和友善维度[D]. 中国科学院心理研究所. 中国科学院大学,2024.
Files in This Item:
File Name/Size: 李玉刚-博士学位论文.pdf (9907 KB)
Document Type: Dissertation
Access: Restricted
License: CC BY-NC-SA