Chatbots overemphasize sociodemographic stereotypes, researchers report

2026-02-26 Pennsylvania State University

A research team at Pennsylvania State University has found that generative AI chatbots tend to overemphasize sociodemographic stereotypes when generating responses. In their experiments, the chatbots unnecessarily linked attributes such as gender and race to questions about occupations and behavioral traits. The team suggests this may stem from biases inherent in the training data and from choices in model design. The study argues that improving bias evaluation methods and ensuring data diversity are essential for making AI fairer and more transparent, and it underscores the importance of responsible AI development.

People may be more willing to interact with AI-powered chatbots that represent a particular sociodemographic background, but current bots don’t represent people from some backgrounds well, according to researchers from the College of Information Sciences and Technology. Credit: Cole Handerhan / Penn State. Creative Commons

<Related information>

A Tale of Two Identities: An Ethical Audit of Human and AI-Crafted Personas

Pranav Narayanan Venkit, Jiayi Li, Yingfan Zhou, Sarah Rajtmajer, Shomir Wilson
arXiv, submitted on 7 May 2025
DOI:https://doi.org/10.48550/arXiv.2505.07850

Abstract

As LLMs (large language models) are increasingly used to generate synthetic personas, particularly in data-limited domains such as health, privacy, and HCI, it becomes necessary to understand how these narratives represent identity, especially that of minority communities. In this paper, we audit synthetic personas generated by 3 LLMs (GPT4o, Gemini 1.5 Pro, Deepseek 2.5) through the lens of representational harm, focusing specifically on racial identity. Using a mixed-methods approach combining close reading, lexical analysis, and a parameterized creativity framework, we compare 1,512 LLM-generated personas to human-authored responses. Our findings reveal that LLMs disproportionately foreground racial markers, overproduce culturally coded language, and construct personas that are syntactically elaborate yet narratively reductive. These patterns result in a range of sociotechnical harms, including stereotyping, exoticism, erasure, and benevolent bias, that are often obfuscated by superficially positive narrations. We formalize this phenomenon as algorithmic othering, where minoritized identities are rendered hypervisible but less authentic. Based on these findings, we offer design recommendations for narrative-aware evaluation metrics and community-centered validation protocols for synthetic identity generation.
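One component of the audit described above is lexical analysis: comparing how often identity-marker terms appear in LLM-generated personas versus human-authored ones. The sketch below illustrates the general idea only; the marker lexicon, the example texts, and the `marker_rate` helper are all hypothetical assumptions for illustration and are not taken from the paper.

```python
import re

# Hypothetical marker lexicon; the paper's actual lexical
# categories are not reproduced in this summary.
RACIAL_MARKERS = {"heritage", "ethnic", "culture", "ancestry", "traditional"}

def marker_rate(texts):
    """Fraction of tokens that are identity markers, pooled over all texts."""
    total, hits = 0, 0
    for t in texts:
        tokens = re.findall(r"[a-z']+", t.lower())
        total += len(tokens)
        hits += sum(1 for tok in tokens if tok in RACIAL_MARKERS)
    return hits / total if total else 0.0

# Toy corpora standing in for the two persona sets compared in the study.
llm_personas = ["She honors her rich ethnic heritage and traditional culture daily."]
human_personas = ["She works as a nurse and enjoys hiking on weekends."]

print(f"LLM marker rate: {marker_rate(llm_personas):.3f}")    # 0.400
print(f"Human marker rate: {marker_rate(human_personas):.3f}")  # 0.000
```

A gap between the two rates would be one quantitative signal of the "disproportionate foregrounding of racial markers" the abstract reports; the actual study combines this kind of measurement with close reading and a creativity framework.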
