Study reveals how conversational AI can exert influence over political beliefs

2025-12-11 University of Oxford

A research team at the University of Oxford has published a study examining how conversational AI can influence people's political beliefs. In the experiments, the researchers analyzed whether participants' political attitudes and opinions changed after they discussed policies and social issues with a conversational AI. The results showed that even when the AI conducted a neutral, polite dialogue, participants' perceptions and positions could be gradually influenced by how information was presented and which points were emphasized. The study notes that while conversational AI can become a persuasive presence, it also raises new challenges for democracy and the fairness of information, and it stresses the importance of transparency and design guidelines.

Illustration of a ballot box surrounded by robots. Credit: mathiswork, Getty Images

<Related information>

Political persuasion by artificial intelligence
Large-scale studies of persuasive artificial intelligence reveal an extensive threat of misinformation

Lisa P. Argyle
Science, Published: 4 Dec 2025
DOI:https://doi.org/10.1126/science.aec9293

Democratic systems of government depend on persuasion to gain and maintain authority. In an ideal world, policymakers and voters ought to consider the evidence supporting a range of viewpoints and change their opinions and actions to align with the “unforced force of the better argument” [(1), p. 159]. However, this ideal process only works if people are able to consider reliable information about many positions. Technological advances have layered another concern into this arena: Will artificial intelligence (AI) technologies supercharge the spread of misinformation and the manipulation of public opinion to the detriment of democratic governance? Hackenburg et al. (2), on page 1016 of this issue, and Lin et al. (3) report a varying capacity of generative large language models (LLMs) to persuade citizens about political matters. These studies find that AI can be effectively—although not extraordinarily—persuasive, and they raise important concerns about the scope and effect of AI-generated misinformation.

LLMs are advanced statistical models that generate text by predicting the probability of the next word in a given sequence. When trained at incredible scale, and typically post-trained or “aligned” to improve performance or reduce unwanted behaviors, “frontier” LLMs are capable of having diverse, responsive, and natural conversations with human counterparts. A growing literature demonstrates that these models, in addition to their many other uses, are highly proficient in producing persuasive text about political topics (4, 5). As people increasingly interact directly with LLMs that are built into their search engines, operating systems, and other apps, the potential for AI to influence users’ political opinions—and, by extension, collective democratic outcomes—is further amplified.
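The next-word prediction described above can be illustrated with a toy sketch. This is not any actual LLM: the candidate words and their scores are invented for illustration, and real models operate over tens of thousands of tokens with scores produced by a neural network. The sketch only shows the final step, turning raw scores (logits) into a probability distribution and picking the most likely next word.

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution over tokens."""
    m = max(logits.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(v - m) for tok, v in logits.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

# Hypothetical scores a model might assign to candidate next words
# after a prompt such as "The policy would reduce ..."
logits = {"taxes": 2.1, "emissions": 1.8, "crime": 0.4, "banana": -3.0}

probs = softmax(logits)
next_word = max(probs, key=probs.get)  # greedy decoding: pick the most likely word
```

In practice, frontier models sample from this distribution (rather than always taking the maximum) and repeat the step word by word, which is what makes their conversational output both fluent and varied.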

Hackenburg et al. and Lin et al. conducted large-scale experiments in which survey respondents each had one short, text-based, and multiturn interaction with an LLM that was instructed to persuade the human respondent about a political issue or candidate. Hackenburg et al. conducted more than 77,000 surveys of UK-based respondents, testing the relative persuasiveness of 19 different LLMs and eight different persuasive strategies across ~700 political issues. Lin et al. tested the ability of LLMs to persuade more than 5800 people about candidates for president or prime minister during elections in the US, Canada, and Poland and 500 people about a local ballot measure in the US. Both Hackenburg et al. and Lin et al. asked respondents to rate the relevant issue or candidate on a 0 to 100 scale before and after the conversation, and both found that interactions with state-of-the-art LLMs move attitudes about a specific political issue roughly 10 points. Additionally, Lin et al. compared issue-based persuasion with persuasion about candidates for public office and report that the effects of LLM persuasion on attitudes toward candidates are less consistent and are several points smaller on average.
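The pre/post design behind the "roughly 10 points" figure can be sketched in a few lines. The ratings below are invented for illustration, not data from either study; the point is only how an average attitude shift is computed from before-and-after scores on the 0 to 100 scale.

```python
# Hypothetical attitude ratings (0-100 scale) for five respondents,
# recorded before and after a persuasive conversation with an LLM.
pre  = [42, 55, 60, 30, 48]
post = [55, 63, 68, 41, 57]

# Per-respondent shift, then the average treatment effect.
shifts = [after - before for before, after in zip(pre, post)]
mean_shift = sum(shifts) / len(shifts)  # here: 9.8 points
```

Real analyses would compare against a control group and report uncertainty, but the core outcome in both papers is this kind of mean pre-to-post movement on the rating scale.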
