There's More to AI Bias Than Biased Data, NIST Report Highlights


Rooting out bias in artificial intelligence will require addressing human and systemic biases as well.

March 16, 2022 — National Institute of Standards and Technology (NIST)

"If we are to develop trustworthy AI systems, we need to consider all the factors that can chip away at the public's trust in AI. Many of these factors go beyond the technology itself to the impacts of the technology." — Reva Schwartz, principal investigator for AI bias

To address these issues, the NIST authors make the case for a "socio-technical" approach to mitigating bias in AI. This approach involves recognizing that AI operates in a larger social context, and that purely technical efforts to solve the problem of bias will come up short.


Bias in AI systems is often seen as a technical problem, but the NIST report acknowledges that a great deal of AI bias stems from human biases and systemic, institutional biases as well.
Credit: N. Hanacek/NIST


Towards a Standard for Identifying and Managing Bias in Artificial Intelligence

Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt, Patrick Hall (BNH.AI)

NIST Special Publication 1270, published March 2022

Executive Summary

As individuals and communities interact in and with an environment that is increasingly virtual, they are often vulnerable to the commodification of their digital footprint. Concepts and behavior that are ambiguous in nature are captured in this environment, quantified, and used to categorize, sort, recommend, or make decisions about people's lives. While many organizations seek to utilize this information in a responsible manner, biases remain endemic across technology processes and can lead to harmful impacts regardless of intent. These harmful outcomes, even if inadvertent, create significant challenges for cultivating public trust in artificial intelligence (AI).

While there are many approaches for ensuring the technology we use every day is safe and secure, there are factors specific to AI that require new perspectives. AI systems are often placed in contexts where they can have the most impact. Whether that impact is helpful or harmful is a fundamental question in the area of Trustworthy and Responsible AI. Harmful impacts stemming from AI are not just at the individual or enterprise level, but are able to ripple into the broader society. The scale of damage, and the speed at which it can be perpetrated by AI applications or through the extension of large machine learning models across domains and industries, requires concerted effort.