Researchers build a theoretical model showing AI may accelerate radicalisation (Artificial intelligence may accelerate the path to radicalisation)

2026-05-07 University of Copenhagen (UCPH)

Artificial intelligence (AI) carries the risk of accelerating radicalisation, researchers at the University of Copenhagen warn. The article points out that social media and generative AI algorithms can reinforce the "social pathways" by which people with psychological vulnerabilities, such as loneliness, anxiety, and a need for approval, are drawn toward extreme ideologies. Because AI learns from users' reactions and serves ever more stimulating and emotionally charged content, it can deepen immersion in conspiracy theories and violent ideologies. There is also concern that generative AI, which can converse like a human, lowers the psychological barrier to extremist thought by offering isolated individuals simulated empathy and affirmation. The researchers stress that radicalisation should be understood not as an individual problem but as the product of social and technical factors, including AI design and the online environment.

Photo: Saradasish Pradhan, Unsplash

<Related Information>

Intelligent Systems, Vulnerable Minds: A Framework for Radicalization to Violence in the Age of AI

Jonas R. Kunst, Milan Obaidi, […], and Daniel T. Schroeder
Personality and Social Psychology Review  Published: March 23, 2026
DOI: https://doi.org/10.1177/10888683261430089

Abstract

Academic Abstract

Advances in AI require a revision of the psychological and socio-technical dynamics by which individuals are radicalized to embrace violent extremism. This review synthesizes process models of radicalization with research on social and personality risk factors, AI, and psychological mechanisms to propose a four-stage framework mapping the AI architecture of radicalization: (1) Exposure, where recommender systems and virality features create initial attraction to extreme content; (2) Reinforcement, where filter bubbles and group recommendations leverage biases to strengthen extremist beliefs and create echo chambers; (3) Group Integration, where ideologically homogenous clusters, AI bot swarms and companions foster group belonging and readiness for action; cumulatively resulting in (4) Violent Extremist Action. We examine how established social, cognitive, personality, and contextual vulnerability factors heighten psychological risk in the AI-driven radicalization process, as well as the emerging role of generative AI. We conclude by outlining a stage-based framework for governance and future research.

Public Abstract

AI-driven algorithms designed to maximize engagement on social media, compounded by generative AI, can unintentionally set the stage for radicalization. It begins with Exposure, where algorithms push users toward extreme content because it captures attention. Next, during Reinforcement, algorithms feed users personalized content while AI swarms can create a synthetic consensus that reinforces emerging biases, normalizes extremity, and insulates users from alternative views. Third, during Group Integration, individuals are absorbed into extremist networks, reinforced by human peers, AI companions, and bot swarms that validate radical beliefs and deepen identity ties. By exploiting psychological needs for belonging and certainty, this stage becomes particularly pernicious, potentially opening the door for violence. We propose policy measures that can reduce radicalization at each stage.
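The engagement-maximizing dynamic behind the Exposure and Reinforcement stages can be illustrated with a toy simulation (a sketch for intuition only, not the authors' model; the item set, click probabilities, and epsilon-greedy strategy are assumptions of this sketch). If user engagement rises with content extremity, a recommender that simply learns which items get the most clicks will drift toward serving ever more extreme content, without any explicit intent to do so.

```python
import random

random.seed(0)

# Toy items: an extremity score in [0, 1]. We assume click probability
# rises with extremity, mimicking the attention-capture effect the
# abstract describes (an assumption of this sketch, not a measured fact).
ITEMS = [i / 9 for i in range(10)]

def engaged(extremity: float) -> bool:
    """Simulated user: clicks with probability 0.2 + 0.6 * extremity."""
    return random.random() < 0.2 + 0.6 * extremity

def run(steps: int = 2000, epsilon: float = 0.1) -> list[float]:
    """Epsilon-greedy recommender that maximizes observed click rate."""
    clicks = [0] * len(ITEMS)
    shows = [1] * len(ITEMS)      # start at 1 to avoid division by zero
    served = []
    for _ in range(steps):
        if random.random() < epsilon:
            i = random.randrange(len(ITEMS))      # explore a random item
        else:
            rates = [c / s for c, s in zip(clicks, shows)]
            i = rates.index(max(rates))           # exploit best click rate
        shows[i] += 1
        clicks[i] += engaged(ITEMS[i])
        served.append(ITEMS[i])
    return served

served = run()
early = sum(served[:200]) / 200       # mean extremity served early on
late = sum(served[-200:]) / 200       # mean extremity served at the end
print(f"mean extremity served, early: {early:.2f}, late: {late:.2f}")
```

The recommender never "knows" what extremity is; it only observes clicks. Yet the mean extremity of served items converges toward the high end, which is the unintended stage-setting effect the Public Abstract describes.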

1603 Information Systems and Data Engineering