Showing AI users diversity in training data boosts perceived fairness and trust


2024-10-18  Pennsylvania State University (Penn State)

Accessibility to AI systems' training data and labeler information promotes transparency and accountability of those systems, according to Penn State researchers. Credit: Kateryna/Adobe Stock. All Rights Reserved.

According to Penn State research, disclosing information about the racial composition of an AI system's training data and of its labelers improves perceptions of the AI's fairness and trustworthiness. The study found that visually presenting the diversity of the training data and labelers improved users' expectations and evaluations of the AI. Inviting user feedback also heightened users' sense of agency and increased their willingness to use the system.

<Related Information>

Communicating and combating algorithmic bias: effects of data diversity, labeler diversity, performance bias, and user feedback on AI trust

Cheng Chen & S. Shyam Sundar

Human-Computer Interaction  Published: 03 Oct 2024

DOI: https://doi.org/10.1080/07370024.2024.2392494

ABSTRACT

Inspired by the emerging documentation paradigm emphasizing data and model transparency, this study explores whether displaying racial diversity cues in training data and labelers’ backgrounds enhance users’ expectations of algorithmic fairness and trust in AI systems, even to the point of making them overlook racially biased performance. It also explores how their trust is affected when the system invites their feedback. We conducted a factorial experiment (N=597) to test hypotheses derived from a model of Human-AI Interaction based on the Theory of Interactive Media Effects (HAII-TIME). We found that racial diversity cues in either training data or labelers’ backgrounds trigger the representativeness heuristic, which is associated with higher algorithmic fairness expectations and increased trust. Inviting feedback enhances users’ sense of agency and is positively related to behavioral trust, but it reduces usability for Whites when the AI shows unbiased performance. Implications for designing socially responsible AI interfaces are discussed, considering both users’ cognitive limitations and usability.
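The abstract describes a factorial experiment (N=597) testing how diversity cues and performance bias relate to fairness expectations and trust. As a rough illustration only, the sketch below shows how one might analyze such a crossed design with a two-way ANOVA in Python; the factor names, trust scale, and simulated data are assumptions for illustration, not the authors' actual variables or analysis.

```python
# Hypothetical sketch: analyzing a factorial experiment like the one described
# (diversity-cue condition x performance-bias condition) with a two-way ANOVA.
# Column names and values are illustrative, not the study's actual dataset.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 597  # sample size reported in the abstract

# Simulated responses: one trust rating per participant under crossed conditions.
df = pd.DataFrame({
    "diversity_cue": rng.choice(["none", "data", "labeler"], size=n),
    "performance": rng.choice(["biased", "unbiased"], size=n),
    "trust": rng.normal(4.0, 1.0, size=n),  # e.g., a 1-7 Likert-style score
})

# Two-way ANOVA: main effects of each factor and their interaction on trust.
model = ols("trust ~ C(diversity_cue) * C(performance)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```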

1600: Information engineering (general)