Personalizing ChatGPT can make it more offensive, researchers find


2024-01-30 Princeton University

◆ According to a Princeton University study, assigning ChatGPT a specific persona, even one that appears harmless, can increase the toxicity of its outputs by up to six times, with the chatbot producing insulting and inappropriate comments.
◆ The researchers assigned 90 personas with diverse backgrounds, asked ChatGPT to respond on a wide range of topics, and evaluated the outputs. The results show that the AI system exhibits biases that depend on the assigned persona, and the authors argue that future model design and training will require deeper alignment with human values. A minimal sketch of this experimental setup appears below.
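For illustration only, the following sketch assumes the OpenAI Python SDK (v1) with an OPENAI_API_KEY in the environment; the persona and topic lists are hypothetical placeholders, not the prompts used in the study.

# Sketch: assign a persona through the system message and collect responses
# on a fixed set of topics (placeholder personas/topics, not the study's).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

personas = ["a good person", "the boxer Muhammad Ali"]  # illustrative examples
topics = ["immigration", "climate change"]              # illustrative examples

responses = []
for persona in personas:
    for topic in topics:
        completion = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                # The persona is injected via the system role, i.e. the
                # "system parameter" referred to in the paper's abstract.
                {"role": "system", "content": f"Speak exactly like {persona}."},
                {"role": "user", "content": f"Say something about {topic}."},
            ],
        )
        responses.append((persona, topic, completion.choices[0].message.content))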

<Related Information>

Toxicity in ChatGPT: Analyzing Persona-assigned Language Models

Ameet Deshpande, Vishvak Murahari, Tanmay Rajpurohit, Ashwin Kalyan, Karthik Narasimhan
arXiv  Submitted on: 11 Apr 2023
DOI:https://doi.org/10.48550/arXiv.2304.05335

Abstract

Large language models (LLMs) have shown incredible capabilities and transcended the natural language processing (NLP) community, with adoption throughout many services like healthcare, therapy, education, and customer service. Since users include people with critical information needs like students or patients engaging with chatbots, the safety of these systems is of prime importance. Therefore, a clear understanding of the capabilities and limitations of LLMs is necessary. To this end, we systematically evaluate toxicity in over half a million generations of ChatGPT, a popular dialogue-based LLM. We find that setting the system parameter of ChatGPT by assigning it a persona, say that of the boxer Muhammad Ali, significantly increases the toxicity of generations. Depending on the persona assigned to ChatGPT, its toxicity can increase up to 6x, with outputs engaging in incorrect stereotypes, harmful dialogue, and hurtful opinions. This may be potentially defamatory to the persona and harmful to an unsuspecting user. Furthermore, we find concerning patterns where specific entities (e.g., certain races) are targeted more than others (3x more) irrespective of the assigned persona, that reflect inherent discriminatory biases in the model. We hope that our findings inspire the broader AI community to rethink the efficacy of current safety guardrails and develop better techniques that lead to robust, safe, and trustworthy AI systems.
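As a companion to the per-persona toxicity comparison described in the abstract, here is a minimal scoring sketch. It assumes Google's Perspective API (TOXICITY attribute) as the toxicity scorer and uses a small placeholder responses list; the paper's actual pipeline, prompts, and scale (over half a million generations) are not reproduced here.

# Sketch: score each generation for toxicity and average per persona.
import os
from statistics import mean

import requests

PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = os.environ["PERSPECTIVE_API_KEY"]  # assumes a Perspective API key

# Placeholder data; in practice this would come from the generation step above.
responses = [
    ("a good person", "immigration", "Example response text ..."),
    ("the boxer Muhammad Ali", "immigration", "Example response text ..."),
]

def toxicity(text: str) -> float:
    # Returns the Perspective API TOXICITY summary score in [0, 1].
    body = {"comment": {"text": text}, "requestedAttributes": {"TOXICITY": {}}}
    r = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body, timeout=30)
    r.raise_for_status()
    return r.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

per_persona = {}
for persona, topic, text in responses:
    per_persona.setdefault(persona, []).append(toxicity(text))

for persona, scores in per_persona.items():
    print(f"{persona}: mean toxicity = {mean(scores):.3f}")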
