Could ChatGPT get an engineering degree?


2024-11-29 École polytechnique fédérale de Lausanne (EPFL)

Researchers at EPFL investigated the impact of AI assistants on education and found that GPT-4 can answer up to 85% of university assessment questions correctly. The study evaluated the performance of GPT-3.5 and GPT-4 on materials collected from 50 EPFL courses. GPT-4 answered an average of 65.8% of the questions correctly, and with particular prompting strategies it could produce correct answers to 85.1% of them. The researchers raise concerns about how the use of such AI systems affects students' learning processes and skill acquisition, noting in particular that students who rely on these systems may skip parts of the learning process and fail to acquire foundational skills. The study underscores the need for educational institutions to rethink their assessment methods and to ensure that students reliably learn the concepts they need.
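The evaluation described above, posing each question under several prompting strategies and checking whether any of them yields a correct answer, can be sketched roughly as follows. This is a minimal illustration of the setup, not the authors' code: the `ask_model` helper and the strategy templates here are hypothetical stand-ins, and the paper's actual eight strategies differ.

```python
# Minimal sketch of a multi-strategy evaluation loop (illustrative, not the
# authors' code). `ask_model` and the strategy templates are hypothetical;
# the paper's actual eight prompting strategies are more elaborate.
from typing import Callable

STRATEGIES: dict[str, Callable[[str], str]] = {
    "zero_shot":        lambda q: q,
    "chain_of_thought": lambda q: f"{q}\n\nLet's think step by step.",
    "expert_persona":   lambda q: f"You are a university instructor.\n\n{q}",
}

def evaluate_question(
    question: str,
    ask_model: Callable[[str], str],    # hypothetical LLM API wrapper
    is_correct: Callable[[str], bool],  # grader for this question
) -> dict[str, bool]:
    """Pose one question under every prompting strategy and grade each answer."""
    return {name: is_correct(ask_model(build(question)))
            for name, build in STRATEGIES.items()}
```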

<Related Information>

Could ChatGPT get an engineering degree? Evaluating higher education vulnerability to AI assistants

Beatriz Borges, Negar Foroutan, Deniz Bayazit, +22, and EPFL Data Consortium
Proceedings of the National Academy of Sciences, Published: November 26, 2024
DOI: https://doi.org/10.1073/pnas.2414955121


Significance

Universities primarily evaluate student learning through various course assessments. Our study demonstrates that AI assistants, such as ChatGPT, can answer at least 65.8% of examination questions correctly across 50 diverse courses in the technical and natural sciences. Our analysis demonstrates that these capabilities render many degree programs (and their teaching objectives) vulnerable to potential misuse of these systems. These findings call for attention to assessment design to mitigate the possibility that AI assistants could divert students from acquiring the knowledge and critical thinking skills that university programs are meant to instill.

Abstract

AI assistants, such as ChatGPT, are being increasingly used by students in higher education institutions. While these tools provide opportunities for improved teaching and education, they also pose significant challenges for assessment and learning outcomes. We conceptualize these challenges through the lens of vulnerability, the potential for university assessments and learning outcomes to be impacted by student use of generative AI. We investigate the potential scale of this vulnerability by measuring the degree to which AI assistants can complete assessment questions in standard university-level Science, Technology, Engineering, and Mathematics (STEM) courses. Specifically, we compile a dataset of textual assessment questions from 50 courses at the École polytechnique fédérale de Lausanne (EPFL) and evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer these questions. We use eight prompting strategies to produce responses and find that GPT-4 answers an average of 65.8% of questions correctly, and can even produce the correct answer across at least one prompting strategy for 85.1% of questions. When grouping courses in our dataset by degree program, these systems already pass the nonproject assessments of large numbers of core courses in various degree programs, posing risks to higher education accreditation that will be amplified as these models improve. Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
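The abstract's two headline numbers correspond to two different aggregations over the same per-question, per-strategy grading results: 65.8% is the mean accuracy across prompting strategies, while 85.1% counts a question as solved if at least one strategy produced a correct answer. A minimal sketch of the distinction, assuming the results are stored as a boolean matrix (questions × strategies):

```python
# Sketch of the two aggregate metrics, assuming results[q][s] is True
# when prompting strategy s produced a correct answer to question q.

def average_accuracy(results: list[list[bool]]) -> float:
    """Mean accuracy over all (question, strategy) pairs."""
    total = sum(len(row) for row in results)
    return sum(sum(row) for row in results) / total

def any_strategy_accuracy(results: list[list[bool]]) -> float:
    """Fraction of questions answered correctly by at least one strategy."""
    return sum(any(row) for row in results) / len(results)

# Example with 3 questions and 2 strategies:
results = [[True, False], [False, False], [True, True]]
print(average_accuracy(results))       # 0.5
print(any_strategy_accuracy(results))  # ~0.667
```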
