2025-12-03 University of Washington (UW)
<Related information>
- https://www.washington.edu/news/2025/12/03/social-media-research-tool-can-reduce-polarization-it-could-also-lead-to-more-user-control-over-algorithms/
- https://www.science.org/doi/10.1126/science.adu5584
Reranking partisan animosity in algorithmic social media feeds alters affective polarization
Tiziano Piccardi, Martin Saveski, Chenyan Jia, Jeffrey Hancock, […], and Michael S. Bernstein
Science, Published: 27 Nov 2025
DOI: https://doi.org/10.1126/science.adu5584
Editor’s summary
Social media platforms’ opaque feed-ranking algorithms, which are designed to maximize engagement, may contribute to political polarization and to problems involving body image, mental health, and other social issues. However, the evidence has rarely enabled conclusions about causality because external researchers need platforms’ permission to experimentally intervene, and collaborations with platforms involve trade-offs that undermine researchers’ intellectual autonomy. Piccardi et al. circumvented this problem with a browser extension that intercepted feeds on X/Twitter in real time using a large language model (LLM) (see the Perspective by Allen and Tucker). Liberals and conservatives were randomly assigned to conditions in which the LLM reranked feeds to up-rank or down-rank the visibility of hostile political content. Up-ranking increased political polarization, whereas down-ranking decreased it. This LLM tool enabled independent scholars to pull back the curtain on unresolved research questions without the platform’s permission. —Ekeoma Uzogara
Structured Abstract
INTRODUCTION
Social media algorithms profoundly impact our lives, yet our understanding of their effects has been limited to interventions that the platforms themselves are willing to test and publish. In this article, we demonstrate a method for independent algorithmic reranking experiments in naturalistic settings and at scale. We leverage this method to demonstrate that algorithmic ranking can both raise and lower levels of affective political polarization in the United States, with no detectable differences across party lines.
RATIONALE
We built a browser extension that intercepted and reranked web-based social media feeds in real time. The extension used a large language model (LLM) to score content and then applied these scores to reorder the feed, all without requiring platform cooperation. Our approach identified content expressing antidemocratic attitudes and partisan animosity (AAPA) and controlled users’ exposure to such content through reranking.
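The score-then-rerank step described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the real extension called an LLM to score posts for AAPA content, whereas the `aapa_score` stub below uses a crude keyword heuristic purely so the reordering logic is runnable.

```python
# Hypothetical sketch of feed reranking by content score.
# aapa_score is a stand-in for the paper's LLM classifier.

def aapa_score(post: str) -> float:
    """Stub scorer: returns a value in [0, 1], higher meaning more
    antidemocratic-attitude/partisan-animosity (AAPA) content.
    Illustrative keyword heuristic only, not the paper's method."""
    hostile_terms = {"traitors", "enemy", "destroy"}
    words = (w.strip(".,!?") for w in post.lower().split())
    hits = sum(w in hostile_terms for w in words)
    return min(1.0, hits / 3)

def rerank_feed(posts: list[str], downrank: bool = True) -> list[str]:
    """Reorder a feed by AAPA score. downrank=True pushes hostile
    posts toward the bottom (the depolarizing condition);
    downrank=False pulls them to the top (the up-ranking condition).
    Python's sort is stable, so ties keep their original feed order."""
    return sorted(posts, key=aapa_score, reverse=not downrank)

feed = [
    "They are traitors who want to destroy the country!",
    "Local library extends weekend hours.",
    "New study on urban transit ridership.",
]
print(rerank_feed(feed, downrank=True))
# hostile post moves to the end; benign posts keep their order
```

Because the sort is stable, posts with equal scores retain their original ranking, so the intervention changes only the relative visibility of AAPA content rather than reshuffling the whole feed.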
We conducted a preregistered field experiment with 1256 participants on X, the most widely used social media platform for political discourse in the United States, during the weeks leading up to the 2024 presidential election. Participants were randomly assigned to two parallel experiments in which their feeds were dynamically reranked for 1 week to either up- or down-rank AAPA posts. We measured the effects of these interventions on affective polarization (participants’ feelings toward the political outgroup) and emotional experience (anger, sadness, excitement, or calm) by using both in-feed and post-experiment surveys. Results were compared to those of participants in control conditions whose feeds were not reranked.
RESULTS
Reranking to change levels of exposure to AAPA content significantly influenced affective polarization. Increased AAPA exposure led to colder feelings toward the political outgroup, whereas decreased AAPA exposure led to warmer feelings. Both shifts amounted to more than 2 degrees on a 0- to 100-degree “feeling thermometer.” The impact on affective polarization was consistently detected in both in-feed and post-experiment surveys, with no evidence of partisan asymmetry. Changes to the feed algorithm shifted participants’ negative emotions in-feed, but these shifts did not persist in post-experiment surveys.
CONCLUSION
This work demonstrates a new method for reranking social media feeds that enables testing interventions without platform collaboration. We found that increasing exposure to AAPA posts significantly increases affective polarization and negative emotions, whereas decreasing exposure reduces them. These changes were comparable in size to 3 years of change in United States affective polarization. As political polarization and societal division become increasingly linked to social media activity, our findings provide a potential pathway for platforms to address these challenges through algorithmic interventions. Together, these interventions may result in algorithms that not only reduce partisan animosity but also promote greater social trust and healthier democratic discourse across party lines.
A platform-independent field experiment demonstrates that reranking content expressing antidemocratic attitudes and partisan animosity in social media feeds alters affective polarization.
A browser extension reranked participants’ social media feeds by reducing or increasing exposure to posts expressing AAPA, which in turn led to corresponding decreases or increases in affective polarization.
Abstract
Today, social media platforms hold the sole power to study the effects of feed-ranking algorithms. We developed a platform-independent method that reranks participants’ feeds in real time and used this method to conduct a preregistered 10-day field experiment with 1256 participants on X during the 2024 US presidential campaign. Our experiment used a large language model to rerank posts that expressed antidemocratic attitudes and partisan animosity (AAPA). Decreasing or increasing AAPA exposure shifted out-party partisan animosity by more than 2 points on a 100-point feeling thermometer, with no detectable differences across party lines, providing causal evidence that exposure to AAPA content alters affective polarization. This work establishes a method to study feed algorithms without requiring platform cooperation, enabling independent evaluation of ranking interventions in naturalistic settings.


