2025-04-22 Delft University of Technology (TU Delft), Netherlands
<Related links>
- https://www.tudelft.nl/en/ide/delft-design-stories/tu-delft-research-reveals-risk-of-ai-chatbots-influencing-behaviour-and-trust
- https://dl.acm.org/doi/10.1145/3706598.3713579
Persuasion in Pixels and Prose: The Effects of Emotional Language and Visuals in Agent Conversations on Decision-Making
Hüseyin Uğur Genç, Senthil Chandrasegaran, Tilman Dingler, Himanshu Verma
CHI ’25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems. Published: 25 April 2025
DOI: https://doi.org/10.1145/3706598.3713579
Abstract
The growing sophistication of Large Language Models allows conversational agents (CAs) to engage users in increasingly personalized and targeted conversations. While users may vary in their receptiveness to CA persuasion, stylistic elements and agent personalities can be adjusted on the fly. Combined with image generation models that create context-specific realistic visuals, CAs have the potential to influence user behavior and decision making. We investigate the effects of linguistic and visual elements used by CAs on user perception and decision making in a charitable donation context with an online experiment (n=344). We find that while CA attitude influenced trust, it did not affect donation behavior. Visual primes played no role in shaping trust, though their absence resulted in higher donations and situational empathy. Perceptions of competence and situational empathy were potential predictors of donation amounts. We discuss the complex interplay of user and CA characteristics and the fine line between benign behavior signaling and manipulation.