Researchers Use AI to Speed Reviews of Existing Evidence


2025-03-12 University of Toronto (U of T)

Researchers at the University of Toronto have developed a new approach that uses artificial intelligence (AI) to accelerate reviews of existing scientific evidence. The technique could shorten the systematic review process, which has traditionally taken years, to a matter of weeks. The AI analyzes large volumes of research papers quickly and efficiently, helping researchers extract the key information. This could speed decision-making in fields such as medicine and public health, enabling policy-making and clinical guideline development grounded in the latest scientific evidence.
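The core technique described in the paper below is prompting a large language model to apply a review's eligibility criteria to each citation and return an include/exclude decision. The following is a minimal sketch of that idea, not the authors' published prompt template: the example criteria, the `screen_abstract` helper, and the one-word answer format are assumptions. It uses the OpenAI Python client with the GPT4-0125-preview model named in the abstract.

```python
# Minimal sketch of LLM-driven abstract screening (hypothetical; not the
# authors' published template). Assumes the OpenAI Python client
# (`pip install openai`) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Eligibility criteria would come from the systematic review's protocol;
# these example criteria are invented for illustration.
CRITERIA = """\
Include if ALL of the following hold:
- Reports a primary seroprevalence estimate
- Human participants
Exclude if ANY of the following hold:
- Case report, editorial, or commentary
- No primary data
"""

PROMPT_TEMPLATE = """\
You are screening citations for a systematic review.

Eligibility criteria:
{criteria}

Title: {title}
Abstract: {abstract}

Answer with exactly one word: INCLUDE or EXCLUDE.
"""

def screen_abstract(title: str, abstract: str) -> bool:
    """Return True if the model votes to include the citation."""
    response = client.chat.completions.create(
        model="gpt-4-0125-preview",  # model used for prompt development
        temperature=0,               # deterministic screening decisions
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(
                criteria=CRITERIA, title=title, abstract=abstract
            ),
        }],
    )
    return "INCLUDE" in response.choices[0].message.content.upper()
```

As the abstract reports, how the criteria are framed in the prompt matters a great deal: naive zero-shot prompts reached only about 49% sensitivity, while the optimized templates exceeded 96% weighted sensitivity.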

<Related Information>

Development of Prompt Templates for Large Language Model–Driven Screening in Systematic Reviews

Christian Cao, MD, Jason Sang, BSc, Rohit Arora, BSc, PhD, David Chen, BSc, MD, Robert Kloosterman, BSc, MSc, MD, Matthew Cecere, BSc, MD, Jaswanth Gorla, BSc, MD, MHI, …, Niklas Bobrovitz, MSc, MD, DPhil
Annals of Internal Medicine  Published: February 25, 2025
DOI:https://doi.org/10.7326/ANNALS-24-02189

Abstract

Background:
Systematic reviews (SRs) are hindered by the initial rigorous article screen, which delays access to reliable information synthesis.
Objective:
To develop generic prompt templates for large language model (LLM)–driven abstract and full-text screening that can be adapted to different reviews.
Design:
Diagnostic test accuracy.
Setting:
48 425 citations were tested for abstract screening across 10 SRs. Full-text screening evaluated all 12 690 freely available articles from the original search. Prompt development used the GPT4-0125-preview model (OpenAI).
Participants:
None.
Measurements:
Large language models were prompted to include or exclude articles based on SR eligibility criteria. Model outputs were compared with original SR author decisions after full-text screening to evaluate performance (accuracy, sensitivity, and specificity).
Results:
Optimized prompts using GPT4-0125-preview achieved a weighted sensitivity of 97.7% (range, 86.7% to 100%) and specificity of 85.2% (range, 68.3% to 95.9%) in abstract screening and weighted sensitivity of 96.5% (range, 89.7% to 100.0%) and specificity of 91.2% (range, 80.7% to 100%) in full-text screening across 10 SRs. In contrast, zero-shot prompts had poor sensitivity (49.0% abstract, 49.1% full-text). Across LLMs, Claude-3.5 (Anthropic) and GPT4 variants had similar performance, whereas Gemini Pro (Google) and GPT3.5 (OpenAI) models underperformed. Direct screening costs for 10 000 citations differed substantially: Where single human abstract screening was estimated to require more than 83 hours and $1666.67 USD, our LLM-based approach completed screening in under 1 day for $157.02 USD.
Limitations:
Further prompt optimizations may exist. Retrospective study. Convenience sample of SRs. Full-text screening evaluations were limited to free PubMed Central full-text articles.
Conclusion:
A generic prompt for abstract and full-text screening achieving high sensitivity and specificity that can be adapted to other SRs and LLMs was developed. Our prompting innovations may have value to SR investigators and researchers conducting similar criteria-based tasks across the medical sciences.
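The Measurements section defines performance against the original reviewers' post-full-text decisions. As a worked illustration of those metrics, here is a small sketch; note that the paper's exact pooling scheme behind "weighted sensitivity" across the 10 SRs is not stated in the abstract, so the example below simply computes per-review metrics from raw counts.

```python
# Sketch of the evaluation described in Measurements: compare model
# include/exclude votes with the original SR authors' final decisions
# and compute accuracy, sensitivity, and specificity. Hypothetical
# helper names and example data; not the authors' evaluation code.

def confusion(model: list[bool], gold: list[bool]) -> tuple[int, int, int, int]:
    """Return (TP, FP, TN, FN), counting gold includes as positives."""
    tp = sum(m and g for m, g in zip(model, gold))
    fp = sum(m and not g for m, g in zip(model, gold))
    tn = sum(not m and not g for m, g in zip(model, gold))
    fn = sum(not m and g for m, g in zip(model, gold))
    return tp, fp, tn, fn

def metrics(model: list[bool], gold: list[bool]) -> dict[str, float]:
    tp, fp, tn, fn = confusion(model, gold)
    return {
        "accuracy": (tp + tn) / len(gold),
        "sensitivity": tp / (tp + fn),  # true includes correctly kept
        "specificity": tn / (tn + fp),  # true excludes correctly dropped
    }

# Example: the model keeps 9 of 10 true includes and drops 85 of 90
# true excludes -> sensitivity 0.90, specificity ~0.94, accuracy 0.94.
gold = [True] * 10 + [False] * 90
model = [True] * 9 + [False] + [True] * 5 + [False] * 85
print(metrics(model, gold))
```

The cost comparison in Results is also easy to sanity-check: $1666.67 USD for more than 83 hours of human abstract screening of 10 000 citations is consistent with roughly 30 seconds per citation at about $20 per hour (10 000 × 30 s ≈ 83.3 h; 83.3 h × $20 ≈ $1667), though those per-citation and hourly rates are inferred here rather than stated in the abstract.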
