2024-10-17 Imperial College London (ICL)
<Related links>
- https://www.imperial.ac.uk/news/257159/humans-protect-ai-bots-from-playtime/
- https://onlinelibrary.wiley.com/doi/10.1155/2024/8864909
Humans Mindlessly Treat AI Virtual Agents as Social Beings, but This Tendency Diminishes Among the Young: Evidence From a Cyberball Experiment
Jianan Zhou, Talya Porat, Nejra van Zalk
Human Behavior and Emerging Technologies Published: 27 September 2024
DOI: https://doi.org/10.1155/2024/8864909
Abstract
The “social being” perspective has largely influenced the design and research of AI virtual agents. Do humans really treat these agents as social beings? To test this, we conducted a 2 (between-subjects; Cyberball condition: exclusion vs. fair play) × 2 (within-subjects; coplayer type: AGENT vs. HUMAN) online experiment employing the Cyberball paradigm; we investigated how participants (N = 244) responded when they observed an AI virtual agent being ostracised or treated fairly by another human in Cyberball, and we compared our results with those from human–human Cyberball research. We found that participants mindlessly applied the social norm of inclusion, compensating the ostracised agent by tossing the ball to them more frequently, just as people would to an ostracised human. This finding suggests that individuals tend to mindlessly treat AI virtual agents as social beings, supporting the media equation theory; however, age (but no other user characteristic) influenced this tendency, with younger participants less likely to mindlessly apply the inclusion norm. We also found that participants showed increased sympathy towards the ostracised agent, but they did not devalue the human player for their ostracising behaviour; this indicates that participants did not mindfully perceive AI virtual agents as comparable to humans. Furthermore, we uncovered two other exploratory findings: an association between frequency of agent usage and sympathy, and a carryover effect of positive usage experience. Our study advances the theoretical understanding of the human side of human–agent interaction. Practically, it provides implications for the design of AI virtual agents, including the consideration of social norms, caution in human-like design, and age-specific targeting.
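To make the paradigm concrete, here is a minimal sketch of a Cyberball-style toss loop. All details (player names, trial count, the scripting rule) are illustrative assumptions, not the study's actual procedure: a scripted "human" coplayer either never tosses to the agent (exclusion) or tosses uniformly at random (fair play).

```python
import random

def run_cyberball(condition: str, n_tosses: int = 30, seed: int = 0):
    """Simulate one hypothetical Cyberball game with three players.

    condition: "exclusion" (scripted human never tosses to the agent)
               or "fair play" (scripted human tosses at random).
    Returns (tosses received per player, tosses from human to agent).
    """
    rng = random.Random(seed)
    players = ["participant", "human", "agent"]
    received = {p: 0 for p in players}   # tosses *received* by each player
    human_to_agent = 0                   # how often the scripted human included the agent
    holder = "human"                     # scripted coplayer starts with the ball

    for _ in range(n_tosses):
        others = [p for p in players if p != holder]
        if holder == "human" and condition == "exclusion":
            target = "participant"       # the scripted human ostracises the agent
        else:
            target = rng.choice(others)  # otherwise, toss uniformly at random
        if holder == "human" and target == "agent":
            human_to_agent += 1
        received[target] += 1
        holder = target

    return received, human_to_agent
```

In the study itself, the measure of "mindless inclusion" was the participant's own toss frequency to the ostracised agent; in this sketch that would correspond to instrumenting the participant's choices rather than scripting them.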