AI Versus Human Graders: Assessing the Role of Large Language Models in Higher Education

Ragolane, Mahlatse and Patel, Shahiem and Salikram, Pranisha (2024) AI Versus Human Graders: Assessing the Role of Large Language Models in Higher Education. Asian Journal of Education and Social Studies, 50 (10). pp. 244-263. ISSN 2581-6268


Abstract

While AI grading is seeing increasing use and adoption, traditional educational practices are being forced to adapt and function alongside AI, especially in assessment grading. Human grading, by contrast, has long been the cornerstone of educational assessment: educators have traditionally assessed student work against established criteria, providing feedback intended to support learning and development. While human grading offers nuanced understanding and personalized feedback, it is also subject to limitations such as grading inconsistencies, biases, and significant time demands. This paper explores the role of large language models (LLMs), such as ChatGPT-3.5 and ChatGPT-4, in grading processes in higher education and compares their effectiveness with that of traditional human grading methods. The study uses both qualitative and quantitative methodologies, and the research extends across multiple academic programs and modules, providing a comprehensive assessment of how AI can complement or replace human graders. In Study 1, we focused on (n=195) scripts across (n=3) modules and compared GPT-3.5, GPT-4, and human graders. Manually marked scripts exhibited an average mark difference of 24%. Subsequently, (n=20) scripts were assessed using GPT-4, which provided a more precise evaluation, with a total average difference of 4% in results. There were individual instances where marks were higher, but these could not naturally be attributed to marker judgment. In Study 2, the results of the first study highlighted the need for a comprehensive memorandum; we therefore identified (n=4341) scripts, of which (n=3508) were used. The study found that AI remains efficient when the memorandum is well structured. Furthermore, while AI excels in scalability, human graders excel in interpreting complex answers, evaluating creativity, and detecting plagiarism. In Study 3, we evaluated formative assessments with GPT-4 (Statistics n=602, Business Statistics n=859, and Logistics Management n=522). The third study demonstrated that AI marking tools can effectively manage the demands of formative assessments, particularly in modules where the questions are objective and structured, such as Statistics and Logistics Management. The first error encountered in Statistics 102 highlighted the importance of a well-designed memorandum. The study concludes that AI tools can effectively reduce the burden on educators but should be integrated into a hybrid model in which human markers and AI systems work in tandem to achieve fairness, accuracy, and quality in assessments. This paper contributes to ongoing debates about the future of AI in education by emphasizing the importance of a well-structured memorandum and human discretion in achieving balanced and effective grading solutions.
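To make the headline metric concrete, the sketch below shows one plausible way the average mark difference between human and AI graders could be computed. This is an illustrative assumption only: the paper does not publish its scoring code, and the function name and sample marks here are hypothetical.

```python
# Hypothetical sketch: comparing AI-assigned and human-assigned marks.
# The data and function are illustrative, not the paper's actual pipeline.
import statistics

def mean_mark_difference(human_marks, ai_marks):
    """Mean absolute difference, in percentage points, between two graders."""
    if len(human_marks) != len(ai_marks):
        raise ValueError("mark lists must align script-for-script")
    diffs = [abs(h - a) for h, a in zip(human_marks, ai_marks)]
    return statistics.mean(diffs)

# Example: three scripts marked out of 100 by a human grader and by an LLM.
human = [62, 74, 55]
ai = [58, 80, 57]
print(f"average difference: {mean_mark_difference(human, ai):.1f} points")
```

Under this reading, the 24% figure for manual marking and the 4% figure for GPT-4 would be averages of such per-script differences across the sampled scripts.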

Item Type: Article
Subjects: Open Digi Academic > Social Sciences and Humanities
Date Deposited: 26 Oct 2024 06:23
Last Modified: 16 Apr 2025 12:50
URI: http://papers.sendtopublish.com/id/eprint/1580
