2024-10-18 Pennsylvania State University (Penn State)
Accessibility to AI systems training data and labeler information promotes transparency and accountability of those systems, according to Penn State researchers. Credit: Kateryna/Adobe Stock. All Rights Reserved.
<Related information>
- https://www.psu.edu/news/research/story/showing-ai-users-diversity-training-data-boosts-perceived-fairness-and-trust
- https://www.tandfonline.com/doi/full/10.1080/07370024.2024.2392494
Communicating and combating algorithmic bias: effects of data diversity, labeler diversity, performance bias, and user feedback on AI trust
Cheng Chen & S. Shyam Sundar
Human-Computer Interaction, Published: 03 Oct 2024
DOI: https://doi.org/10.1080/07370024.2024.2392494
ABSTRACT
Inspired by the emerging documentation paradigm emphasizing data and model transparency, this study explores whether displaying racial diversity cues in training data and labelers’ backgrounds enhances users’ expectations of algorithmic fairness and trust in AI systems, even to the point of making them overlook racially biased performance. It also explores how their trust is affected when the system invites their feedback. We conducted a factorial experiment (N = 597) to test hypotheses derived from a model of Human-AI Interaction based on the Theory of Interactive Media Effects (HAII-TIME). We found that racial diversity cues in either training data or labelers’ backgrounds trigger the representativeness heuristic, which is associated with higher algorithmic fairness expectations and increased trust. Inviting feedback enhances users’ sense of agency and is positively related to behavioral trust, but it reduces usability for Whites when the AI shows unbiased performance. Implications for designing socially responsible AI interfaces are discussed, considering both users’ cognitive limitations and usability.