TY - JOUR
T1 - Could ChatGPT get an engineering degree? Evaluating higher education vulnerability to AI assistants
AU - EPFL Grader Consortium
AU - EPFL Data Consortium
AU - Borges, Beatriz
AU - Foroutan, Negar
AU - Bayazit, Deniz
AU - Sotnikova, Anna
AU - Montariol, Syrielle
AU - Nazaretzky, Tanya
AU - Banaei, Mohammadreza
AU - Sakhaeirad, Alireza
AU - Servant, Philippe
AU - Neshaei, Seyed Parsa
AU - Frej, Jibril
AU - Romanou, Angelika
AU - Weiss, Gail
AU - Mamooler, Sepideh
AU - Chen, Zeming
AU - Fan, Simin
AU - Gao, Silin
AU - Ismayilzada, Mete
AU - Paul, Debjit
AU - Schwaller, Philippe
AU - Friedli, Sacha
AU - Jermann, Patrick
AU - Käser, Tanja
AU - Bosselut, Antoine
AU - Schöpfer, Alexandre
AU - Janchevski, Andrej
AU - Tiede, Anja
AU - Linden, Clarence
AU - Troiani, Emanuele
AU - Salvi, Francesco
AU - Behrens, Freya
AU - Orsi, Giacomo
AU - Piccioli, Giovanni
AU - Sevel, Hadrien
AU - Coulon, Louis
AU - Pineros-Rodriguez, Manuela
AU - Bonnassies, Marin
AU - Hellich, Pierre
AU - van Gerwen, Puck
AU - Gambhir, Sankalp
AU - Pirelli, Solal
AU - Blanchard, Thomas
AU - Callens, Timothée
AU - Aoun, Toni Abi
AU - Alonso, Yannick Calvino
AU - Cho, Yuri
AU - Radenovic, Aleksandra
AU - Alahi, Alexandre
AU - Mathis, Alexander
AU - Ribeiro, Manoel Horta
N1 - Publisher Copyright:
Copyright © 2024 the Author(s). Published by PNAS.
PY - 2024/12/3
Y1 - 2024/12/3
N2 - AI assistants, such as ChatGPT, are increasingly being used by students in higher education institutions. While these tools provide opportunities for improved teaching and education, they also pose significant challenges for assessment and learning outcomes. We conceptualize these challenges through the lens of vulnerability, the potential for university assessments and learning outcomes to be impacted by student use of generative AI. We investigate the potential scale of this vulnerability by measuring the degree to which AI assistants can complete assessment questions in standard university-level Science, Technology, Engineering, and Mathematics (STEM) courses. Specifically, we compile a dataset of textual assessment questions from 50 courses at the École polytechnique fédérale de Lausanne (EPFL) and evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer these questions. We use eight prompting strategies to produce responses and find that GPT-4 answers an average of 65.8% of questions correctly, and can even produce the correct answer across at least one prompting strategy for 85.1% of questions. When grouping courses in our dataset by degree program, these systems already pass the nonproject assessments of large numbers of core courses in various degree programs, posing risks to higher education accreditation that will be amplified as these models improve. Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
AB - AI assistants, such as ChatGPT, are increasingly being used by students in higher education institutions. While these tools provide opportunities for improved teaching and education, they also pose significant challenges for assessment and learning outcomes. We conceptualize these challenges through the lens of vulnerability, the potential for university assessments and learning outcomes to be impacted by student use of generative AI. We investigate the potential scale of this vulnerability by measuring the degree to which AI assistants can complete assessment questions in standard university-level Science, Technology, Engineering, and Mathematics (STEM) courses. Specifically, we compile a dataset of textual assessment questions from 50 courses at the École polytechnique fédérale de Lausanne (EPFL) and evaluate whether two AI assistants, GPT-3.5 and GPT-4, can adequately answer these questions. We use eight prompting strategies to produce responses and find that GPT-4 answers an average of 65.8% of questions correctly, and can even produce the correct answer across at least one prompting strategy for 85.1% of questions. When grouping courses in our dataset by degree program, these systems already pass the nonproject assessments of large numbers of core courses in various degree programs, posing risks to higher education accreditation that will be amplified as these models improve. Our results call for revising program-level assessment design in higher education in light of advances in generative AI.
KW - LLM
KW - education
KW - education vulnerability
KW - generative AI
UR - http://www.scopus.com/inward/record.url?scp=85211047484&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85211047484&partnerID=8YFLogxK
U2 - 10.1073/pnas.2414955121
DO - 10.1073/pnas.2414955121
M3 - Article
C2 - 39589890
AN - SCOPUS:85211047484
SN - 0027-8424
VL - 121
JO - Proceedings of the National Academy of Sciences of the United States of America
JF - Proceedings of the National Academy of Sciences of the United States of America
IS - 49
M1 - e2414955121
ER -