Democratic Approach Proposed to Challenge, Improve AI Tools

2026-05-13 Pennsylvania State University (Penn State)

A research team at Pennsylvania State University (Penn State) has proposed an evaluation and improvement method that incorporates a "democratic approach" to raise the performance and fairness of artificial intelligence (AI) tools. Rather than leaving AI design to a small circle of developers and experts, the study examined mechanisms for reflecting the views of users and communities with diverse backgrounds. Specifically, it aims to reduce bias and unfairness in AI through user-participatory evaluation, feedback collection that accounts for diverse values, and open improvement processes. The researchers note that as AI systems are used ever more widely in social decision-making and information provision, ensuring transparency and accountability is essential. They also suggest that democratic design methods could lead to more trustworthy AI development that does not disadvantage particular groups. The work is regarded as pointing to a new direction for AI governance and ethical AI development.

A Penn State-led research team tested different ways of sharing decision-making power, seeking a fairer, democratic method to give more people an equal opportunity to shape how AI behaves. Credit: Juan D. Villa Romero. All Rights Reserved.

<Related Information>

Democratic governance through DAO-based deliberation and voting for inclusive decision making in AI models

Tanusree Sharma, Yujin Potter, Jongwon Park, Yiren Liu, Yun Huang, Sunny Liu, Dawn Song, Jeff Hancock & Yang Wang
Scientific Reports, Published: 03 March 2026
DOI: https://doi.org/10.1038/s41598-026-40180-8

Abstract

A major criticism of AI development is the lack of transparency, such as inadequate documentation and traceability in its design and decision-making processes, leading to adverse outcomes including discrimination, lack of inclusivity and representation, and breaches of legal regulations. Underserved populations, in particular, are disproportionately affected by these design decisions. Furthermore, traditional social science techniques such as interviews, focus groups, and surveys struggle to adequately capture user needs and expectations in the digital era, due to their inherent limitations in deliberation, consensus-building, and providing consistent insights. We developed a democratic decision framework utilizing a Decentralized Autonomous Organization (DAO) to enable underserved groups to deliberate and reach a consensus on key AI issues. To assess our proposed democratic decision mechanism, we conducted a case study on updating an AI model specification based on diverse stakeholders' input. We focus on reducing stereotypical biases in text-to-image systems, particularly gender bias in image generation from text prompts. We designed and experimented with various governance configurations, including decision aggregation schemes and decision power, to examine how democratic processes could guide updates to the AI model. Through a 2 × 2 experimental design, we tested different aggregation schemes (ranked vs. quadratic) and decision power distributions (equal vs. 20/80 differential) in a randomized online experiment (n=177) with participants from the global south and people with disabilities, to study how the varying governance mechanisms impact people's perceptions of the decision-making processes and the resulting output of the AI model specification.
Our results indicate that despite their diverse backgrounds, participants showed convergence in deliberations on several aspects, including user control over image generation, multiple output options for user selection, and the social appropriateness and accuracy of generated images. Our study underscores the importance of appropriate governance in democratic decision-making for AI alignment. Notably, the combination of the quadratic preference aggregation method, which gives minorities more voice, and equal decision power distribution was perceived as a fairer and more democratic approach.
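The abstract contrasts ranked and quadratic preference aggregation. The paper does not publish its implementation, but as an illustration only, the two schemes can be sketched as follows: Borda-style ranked scoring gives points by ballot position, while quadratic voting gives each voter a fixed credit budget whose effective vote weight grows only as the square root of credits spent, so intense minority preferences count for relatively more. The function names and the credit budget are assumptions for this sketch, not details from the study.

```python
import math


def ranked_aggregate(ballots):
    """Borda-style ranked aggregation.

    Each ballot ranks options best-first; an option at position p on a
    ballot of n options earns (n - p - 1) points.
    """
    scores = {}
    for ranking in ballots:
        n = len(ranking)
        for pos, option in enumerate(ranking):
            scores[option] = scores.get(option, 0) + (n - pos - 1)
    return scores


def quadratic_aggregate(allocations, budget=100):
    """Quadratic voting aggregation.

    Each voter spends at most `budget` credits across options; the
    effective votes contributed to an option equal the square root of
    the credits spent on it, so concentrating credits on one option
    yields diminishing returns.
    """
    scores = {}
    for allocation in allocations:
        if sum(allocation.values()) > budget:
            raise ValueError("allocation exceeds credit budget")
        for option, credits in allocation.items():
            scores[option] = scores.get(option, 0.0) + math.sqrt(credits)
    return scores


# Two voters mildly prefer A; one voter strongly prefers B.
ranked = ranked_aggregate([["A", "B"], ["A", "B"], ["B", "A"]])
# Under ranked scoring the mild majority wins outright: A=2, B=1.

quadratic = quadratic_aggregate([{"A": 25}, {"A": 25}, {"B": 100}])
# Under quadratic voting the strong minority preference catches up:
# A gets sqrt(25)+sqrt(25)=10.0, B gets sqrt(100)=10.0.
```

The example shows the mechanism the abstract attributes to quadratic aggregation: a minority that feels strongly can match a lukewarm majority, which is one reason participants rated the quadratic scheme with equal decision power as fairer.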
