As the brain of machines, artificial intelligence makes machines smarter. From human-machine interaction to human-machine teaming, the role of machines is becoming increasingly diverse. Regardless of how the human-machine relationship evolves, human-machine trust remains a key variable determining the level of collaboration. Researchers have extensively studied how to establish an appropriate level of trust to promote the efficiency and safety of human-machine collaboration, and previous work has found that a machine's level of intelligence is an important variable affecting human-machine trust. However, previous studies failed to consider the effect of perceived intelligence and the construct complexity of human-machine trust.
Additionally, existing instruments for assessing perceived intelligence and human-machine trust have several limitations. To address these gaps, the current research conducted three studies aiming to develop assessment tools for perceived intelligence and human-machine trust, and to further explore the relationship between AI intelligence and human-machine trust.
Study 1 developed and validated a more general intelligence scale for AI through a literature review, expert interviews (N=15), and three rounds of quantitative analysis (N=1033). The scale measures users' perceived intelligence of AI across five dimensions: perceived utility, perceived security, perceived explainability, perceived generalization, and perceived autonomy. The Cronbach's α for each of the five factors was higher than 0.70, demonstrating good reliability. The score on this scale was positively correlated with human-machine trust (r=0.56) and with a positive attitude toward AI (r=0.41), providing evidence of strong criterion validity. Additionally, the scale demonstrated good incremental validity, as it explained additional variance in predicting human-machine trust beyond usability. The new instrument for perceived AI intelligence enriches AI measurement theory and is applicable to a wide range of AI products, demonstrating both its theoretical and practical value.
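As a concrete illustration of the reliability criterion used in Study 1 (Cronbach's α above 0.70 for each factor), the coefficient can be computed directly from an item-score matrix. The sketch below uses simulated 5-point Likert responses, not the study's actual data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Simulated 5-point Likert responses for one hypothetical 4-item factor:
# each item loads on a shared latent factor plus item-specific noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
scores = np.clip(np.round(3 + latent + rng.normal(scale=0.6, size=(200, 4))), 1, 5)
alpha = cronbach_alpha(scores)
```

Because the four simulated items share a strong common factor, α comfortably exceeds the 0.70 threshold reported in the study.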
Study 2 translated an English human-machine trust scale into Chinese and validated it. Exploratory factor analysis (N=444) and confirmatory factor analysis (N=426) were used to determine the structure of the resulting scale, which comprised two dimensions: performance trust and moral trust. The Cronbach's α values of the two factors were 0.87 and 0.92 respectively, indicating good reliability. The score on this scale was significantly positively correlated with reliance-self (r=0.58), reliance-others (r=0.61), and willingness to use (r=0.71), demonstrating good criterion validity. Additionally, the scale showed good construct validity, with AVEs greater than 0.5, CRs greater than 0.7, and HTMT values less than 0.85. Therefore, the Chinese human-machine trust scale is an effective measurement tool.
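The discriminant-validity criterion in Study 2 (HTMT below 0.85) is the heterotrait-monotrait ratio: the mean correlation between items of different factors divided by the geometric mean of the within-factor item correlations. A minimal sketch, using a hypothetical item correlation matrix rather than the study's data:

```python
import numpy as np

def htmt(corr: np.ndarray, idx_a: list, idx_b: list) -> float:
    """Heterotrait-monotrait ratio for two item sets over an item correlation matrix."""
    hetero = corr[np.ix_(idx_a, idx_b)].mean()       # between-factor correlations
    def mono(idx):
        sub = corr[np.ix_(idx, idx)]
        mask = ~np.eye(len(idx), dtype=bool)         # drop the diagonal of 1s
        return sub[mask].mean()                      # within-factor correlations
    return hetero / np.sqrt(mono(idx_a) * mono(idx_b))

# Hypothetical correlation matrix: two 3-item factors,
# within-factor r = 0.7, between-factor r = 0.4.
corr = np.full((6, 6), 0.4)
corr[:3, :3] = 0.7
corr[3:, 3:] = 0.7
np.fill_diagonal(corr, 1.0)
value = htmt(corr, [0, 1, 2], [3, 4, 5])   # 0.4 / 0.7 ≈ 0.57, below 0.85
```

With within-factor correlations clearly stronger than between-factor ones, the ratio falls well under the 0.85 cutoff, indicating the two factors are empirically distinct.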
Study 3 explored the mediating effect of perceived intelligence on human-machine trust through two experiments. Experiment 3a (N=63) examined the influence of utility and selfishness on human-machine trust. The results showed that utility had two mediating paths (perceived utility and perceived selfishness), whereas selfishness had only one (perceived selfishness). Experiment 3b (N=58) examined the effects of autonomy, security, and selfishness on human-machine trust. The results showed that both autonomy and security had three mediating paths (perceived autonomy, perceived security, and perceived selfishness), while selfishness had two (perceived security and perceived selfishness). Together, Experiments 3a and 3b revealed a more comprehensive set of pathways between objective machine attributes and human-machine trust, and further supported the correspondence between physical quantities and psychological quantities. However, this correspondence differed between intelligence and selfishness: intelligence evoked a richer set of subjective perceptions, including perceived utility, perceived autonomy, perceived security, and perceived selfishness, whereas selfishness evoked a much narrower set, mainly reflected in perceived selfishness.
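A mediating path of the kind tested in Study 3 is typically quantified as the product of two regression coefficients: the effect of the manipulated attribute on the mediator (a) and the effect of the mediator on trust, controlling for the attribute (b). The sketch below simulates a simplified version of one Experiment 3a path (utility → perceived utility → trust); all variable names and effect sizes are hypothetical, not the study's data or analysis code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
# Hypothetical data: utility condition (X), perceived utility (M), trust (Y)
x = rng.integers(0, 2, n).astype(float)            # low vs. high machine utility
m = 0.6 * x + rng.normal(scale=1.0, size=n)        # mediator: perceived utility
y = 0.5 * m + 0.2 * x + rng.normal(scale=1.0, size=n)  # outcome: trust

def ols_slopes(pred: np.ndarray, dep: np.ndarray) -> np.ndarray:
    """OLS slope coefficients of dep on pred (intercept included, then dropped)."""
    X = np.column_stack([np.ones(len(dep)), pred])
    coef, *_ = np.linalg.lstsq(X, dep, rcond=None)
    return coef[1:]

a = ols_slopes(x, m)[0]                             # path X -> M
b = ols_slopes(np.column_stack([m, x]), y)[0]       # path M -> Y, controlling X
indirect = a * b                                    # indirect (mediated) effect
```

In practice the indirect effect's significance would be assessed with a bootstrap confidence interval; here the simulated data guarantee a positive mediated effect.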
The current study developed two reliable and valid instruments to measure perceived intelligence and human-machine trust, enriching the existing literature on this topic. Additionally, based on these new tools, it explored the mediating effect of perceived intelligence in the relationship between AI's intelligence and human-machine trust. In summary, this study has both theoretical and practical implications.