|Alternative Title||The Effect of In-group Favoritism on Users' Trust in Intelligent Systems: The Role of Group Identity in Real Groups|
In the era of artificial intelligence, intelligent systems play an increasingly important role in everyday life and interact with human beings at an unprecedented frequency. Human trust in such systems has become a core research topic because it plays an important role in human-computer interaction. In the past decade, researchers have recognized this importance and paid considerable attention to the field.
Most studies have investigated whether mechanisms found in research on interpersonal trust transfer to human-computer trust. Existing research has found that the effect of in-group favoritism, which plays an essential role in interpersonal trust, can also promote the development of human-computer trust. However, current studies have only examined this effect in hypothetical groups created with the Minimal Group Paradigm. It is more practical to examine the effect in real groups, but real groups are more complex than hypothetical ones. On the one hand, real groups have different histories, cultures, capabilities, and stereotypes, which may influence users' general attitudes and behaviors toward an intelligent system when the system carries a group tag (e.g., a Chinese robot). On the other hand, users may vary in their levels of group identity (e.g., patriots vs. traitors), and their identity may differ in salience (e.g., when the group is threatened by an out-group); both factors may influence their trust toward their in-group. Do users show different levels of trust toward an intelligent system depending on whether it shares their group membership (in-group vs. out-group)? Do users' group identity and group-membership salience also affect trust? To investigate these two questions, we conducted two studies.
Study 1 explored, in real groups, how intelligent systems with different group memberships and different levels of reliability, together with individuals' levels of group identity, affect user trust. The study used a 2 (group membership) x 2 (group identity) x 3 (reliability) mixed experimental design, in which group membership was a between-subject variable and group identity and reliability were within-subject variables; both subjective trust and behavioral trust were measured. Results showed that reliability affected participants' trust in the intelligent system: trust in the highly reliable system was higher than in the less reliable system. Group membership also affected trust: participants trusted the in-group system more than the out-group system. Finally, group identity affected trust: when the out-group's competence was slightly higher, low-identity participants trusted the in-group system less than the out-group system, whereas high-identity participants showed no difference in trust between the in-group and out-group systems.
Study 2 explored whether identity salience moderates the influence of group membership on trust. Study 2a examined whether the salience of group identity affects the effect of group membership on trust, using a 2 (group membership) x 2 (identity salience) x 2 (reliability) mixed experimental design. The results showed a significant main effect of salience: trust in the in-group system was lower in the identity-salience group than in the control group. To examine why salience reduced participants' trust in Study 2a, Study 2b explored the role of different salience-induction methods, using a 2 (group membership) x 3 (identity salience) x 2 (reliability) mixed experimental design. The results showed a significant main effect of the manipulation: trust was higher in the official-evaluation group than in the other groups. There was also an interaction between group membership and group identity: participants in the high-identity group trusted the in-group system more than the out-group system.
Overall, this work examined the role of group identity in real groups and, for the first time, explored how the degree and salience of group identity affect human-machine trust. The studies found that in real groups the group membership of an intelligent system affects users' trust toward it, and that group identity moderates this effect. This shows that the role of group membership in interpersonal trust can be transferred to human-computer trust, providing empirical support for applications of human-machine trust. However, group identity salience did not moderate the effect of group membership, and the reason for this needs further exploration.
|Keyword||group membership; group identity; trust; intelligent system; identity salience|
|Place of Conferral||Institute of Psychology, Chinese Academy of Sciences|
|邹翔鹰 (Zou Xiangying). The Effect of In-group Favoritism on Users' Trust in Intelligent Systems: The Role of Group Identity in Real Groups [D]. Institute of Psychology, Chinese Academy of Sciences. University of Chinese Academy of Sciences, 2019.|
|Files in This Item:|
|邹翔鹰-硕士学位论文.pdf (2070KB)||Thesis||Restricted Access||CC BY-NC-SA||Application Full Text|
|Similar articles in Google Scholar|
|Similar articles in Baidu academic|
|Similar articles in Bing Scholar|
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.