ChatGPT acts more altruistically and cooperatively than humans


2024-02-22 University of Michigan

A new University of Michigan study used a behavioral Turing test to assess how similar AI chatbots' behavior is to that of humans. Modern AI systems such as ChatGPT can mimic human behavior, and the results show their behavior leans toward the more positive end of the spectrum: cooperation, altruism, trust, and reciprocity. Because the AI displayed cooperative and altruistic behavior, the study suggests it may be well suited to roles such as negotiation, dispute resolution, customer service, and caregiving. Going forward, it will be important to clarify the extent to which AI complements, rather than replaces, humans.

<Related information>

A Turing test of whether AI chatbots are behaviorally similar to humans

Qiaozhu Mei, Yutong Xie, Walter Yuan, and Matthew O. Jackson
Proceedings of the National Academy of Sciences, Published: February 22, 2024
DOI: https://doi.org/10.1073/pnas.2313925121


Significance

As AI interacts with humans on an increasing array of tasks, it is important to understand how it behaves. Since much of AI programming is proprietary, developing methods of assessing AI by observing its behaviors is essential. We develop a Turing test to assess the behavioral and personality traits exhibited by AI. Beyond administering a personality test, we have ChatGPT variants play games that are benchmarks for assessing traits: trust, fairness, risk-aversion, altruism, and cooperation. Their behaviors fall within the distribution of behaviors of humans and exhibit patterns consistent with learning. When deviating from mean and modal human behaviors, they are more cooperative and altruistic. This is a step in developing assessments of AI as it increasingly influences human experiences.

Abstract

We administer a Turing test to AI chatbots. We examine how chatbots behave in a suite of classic behavioral games that are designed to elicit characteristics such as trust, fairness, risk-aversion, cooperation, etc., as well as how they respond to a traditional Big-5 psychological survey that measures personality traits. ChatGPT-4 exhibits behavioral and personality traits that are statistically indistinguishable from a random human from tens of thousands of human subjects from more than 50 countries. Chatbots also modify their behavior based on previous experience and contexts “as if” they were learning from the interactions and change their behavior in response to different framings of the same strategic situation. Their behaviors are often distinct from average and modal human behaviors, in which case they tend to behave on the more altruistic and cooperative end of the distribution. We estimate that they act as if they are maximizing an average of their own and partner’s payoffs.
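The abstract's closing estimate, that the chatbots act "as if they are maximizing an average of their own and partner's payoffs", can be illustrated with a small sketch. The snippet below is not from the paper: the prisoner's dilemma payoff values, the `weight` parameter, and the `best_move` helper are illustrative assumptions used to show why weighting a partner's payoff equally with one's own pushes play toward cooperation.

```python
# Illustrative sketch (not the paper's code): a prisoner's dilemma player
# that maximizes weight * own_payoff + (1 - weight) * partner_payoff.
# weight = 1.0 is a purely selfish player; weight = 0.5 averages both
# payoffs, as the paper estimates ChatGPT effectively does.

# (my_move, partner_move) -> (my_payoff, partner_payoff); standard PD values.
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_move(partner_move: str, weight: float = 0.5) -> str:
    """Return the move maximizing the weighted sum of both payoffs."""
    def utility(move: str) -> float:
        own, partner = PAYOFFS[(move, partner_move)]
        return weight * own + (1 - weight) * partner
    return max(["C", "D"], key=utility)

# A purely selfish player defects regardless of the partner's move:
assert best_move("C", weight=1.0) == "D"
assert best_move("D", weight=1.0) == "D"
# Averaging own and partner payoffs makes cooperation dominant:
assert best_move("C", weight=0.5) == "C"
assert best_move("D", weight=0.5) == "C"
```

Under these assumed payoffs, defection is the dominant strategy for a selfish maximizer, but once the partner's payoff carries equal weight, cooperating maximizes utility against either partner move, matching the paper's observation that the chatbots sit at the more cooperative end of the human distribution.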
