2025-12-11 University of Oxford
Illustration of a ballot box surrounded by robots. Credit: mathiswork, Getty Images
<Related information>
- https://www.ox.ac.uk/news/2025-12-11-study-reveals-how-conversational-ai-can-exert-influence-over-political-beliefs
- https://www.science.org/doi/10.1126/science.aec9293
Political persuasion by artificial intelligence
Large-scale studies of persuasive artificial intelligence reveal an extensive threat of misinformation
Lisa P. Argyle
Science, Published: 4 Dec 2025
DOI: https://doi.org/10.1126/science.aec9293
Democratic systems of government depend on persuasion to gain and maintain authority. In an ideal world, policymakers and voters ought to consider the evidence supporting a range of viewpoints and change their opinions and actions to align with the “unforced force of the better argument” [(1), p. 159]. However, this ideal process only works if people are able to consider reliable information about many positions. Technological advances have layered another concern into this arena: Will artificial intelligence (AI) technologies supercharge the spread of misinformation and the manipulation of public opinion to the detriment of democratic governance? Hackenburg et al. (2), on page 1016 of this issue, and Lin et al. (3) report a varying capacity of generative large language models (LLMs) to persuade citizens about political matters. These studies find that AI can be effectively—although not extraordinarily—persuasive, and they raise important concerns about the scope and effect of AI-generated misinformation.
LLMs are advanced statistical models that generate text by predicting the probability of the next word in a given sequence. When trained at incredible scale, and typically post-trained or “aligned” to improve performance or reduce unwanted behaviors, “frontier” LLMs are capable of having diverse, responsive, and natural conversations with human counterparts. A growing literature demonstrates that these models, in addition to their many other uses, are highly proficient in producing persuasive text about political topics (4, 5). As people increasingly interact directly with LLMs that are built into their search engines, operating systems, and other apps, the potential for AI to influence users’ political opinions—and, by extension, collective democratic outcomes—is further amplified.
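As a rough illustration of the next-word prediction described above, the toy sketch below scores a handful of candidate words with made-up logits, converts them to probabilities with a softmax, and picks the most likely continuation. The vocabulary and scores are invented for illustration and do not come from any real model.

```python
import numpy as np

# Toy illustration of next-token prediction: given the text so far, a language
# model assigns a probability to each candidate next word, then appends the most
# likely (or a sampled) word and repeats. Vocabulary and logits are invented.
vocab = ["policies", "voters", "evidence", "elections"]
logits = np.array([2.1, 0.3, 1.2, -0.5])   # hypothetical scores for each candidate

probs = np.exp(logits - logits.max())
probs /= probs.sum()                        # softmax: scores -> probability distribution

next_word = vocab[int(np.argmax(probs))]    # greedy choice of the next word
print(dict(zip(vocab, probs.round(3))), "->", next_word)
```

A real frontier model does this over a vocabulary of tens of thousands of tokens, with scores produced by a network trained on vast text corpora, but the basic generate-one-token-at-a-time loop is the same.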
Hackenburg et al. and Lin et al. conducted large-scale experiments in which survey respondents each had one short, text-based, multiturn interaction with an LLM that was instructed to persuade the human respondent about a political issue or candidate. Hackenburg et al. conducted more than 77,000 surveys of UK-based respondents, testing the relative persuasiveness of 19 different LLMs and eight different persuasive strategies across ~700 political issues. Lin et al. tested the ability of LLMs to persuade more than 5800 people about candidates for president or prime minister during elections in the US, Canada, and Poland, and 500 people about a local ballot measure in the US. Both Hackenburg et al. and Lin et al. asked respondents to rate the relevant issue or candidate on a 0 to 100 scale before and after the conversation, and both found that interactions with state-of-the-art LLMs move attitudes about a specific political issue by roughly 10 points. Additionally, Lin et al. compared issue-based persuasion with persuasion about candidates for public office and report that the effects of LLM persuasion on attitudes toward candidates are less consistent and are several points smaller on average.
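To make the outcome measure concrete, the sketch below simulates hypothetical pre- and post-conversation ratings on the same 0 to 100 scale and computes the mean attitude shift relative to a control group. Every number here is invented for illustration and is not taken from either study.

```python
import numpy as np

# Hypothetical data illustrating the pre/post design: each respondent rates the
# issue or candidate on a 0-100 scale before and after the conversation, and the
# persuasion effect is the mean pre-to-post shift in the treated group minus the
# shift in a control group. All parameters below are invented.
rng = np.random.default_rng(0)
n = 1_000
pre = rng.uniform(20, 80, n)                                   # baseline attitudes
post_treated = np.clip(pre + rng.normal(10, 15, n), 0, 100)    # after talking with the LLM
post_control = np.clip(pre + rng.normal(0, 15, n), 0, 100)     # after a neutral task

effect = (post_treated - pre).mean() - (post_control - pre).mean()
print(f"Estimated persuasion effect: {effect:.1f} points on the 0-100 scale")
```

A shift of about 10 points on this scale is what both studies report for issue-based persuasion by state-of-the-art LLMs; the candidate-persuasion effects reported by Lin et al. are smaller and noisier.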


