Fig 1.
A workflow diagram provides an overview of the project, including participant invitation, data collection, data processing, statistical analysis, evaluation, and reporting.
Table 1.
The number and proportions of questionnaire responses for each country.
Table 2.
The number and proportions of questionnaire responses for each academic job role.
Fig 2.
The proposed framework for auto-assessing students’ work submissions comprises GAI, a rule-based algorithm, and human intervention.
Table 3.
The number and proportions of multiple-choice question responses from the survey (n = 117 participants).
Fig 3.
Pie charts showing the diversity of participants, including a) country of work and b) job role type.
Fig 4.
Limitations identified in terms of the awareness of AI in education.
a) LLM familiarity, b) student awareness, c) institutional policy.
Fig 5.
Pie charts demonstrating the impacts of AI in education.
a) AI identified, b) effect on originality, c) effect on quality.
Fig 6.
Pie charts describing academics’ opinions regarding AI in education.
a) identification effectiveness, b) should AI be allowed, c) would autonomous assessment be beneficial?
Table 4.
Interdependence between the recommendation of AI-based auto assessment tools, acceptance of the usage of AI-based tools by students, and other related factors.
Fig 7.
Responses to the question “What strategies, if any, does your institution employ to mitigate the impact of AI tools on assessment integrity? (Check all that apply)”.
Table 5.
Topics identified in open-ended responses using LDA, with country-specific prevalence and effect sizes.
Table 6.
Functional validation outcomes of the prototype auto-assessment framework.
Table 7.
Evaluation results of the prototype application based on participant feedback (n = 20).
Fig 8.
Visualising responses to prototype evaluation questions. a) effectiveness of AI-generated MCQs (Q5); b) clarity and understandability of AI-generated MCQs (Q7); c) accuracy of AI-generated feedback (Q8); d) feasibility of the application for use in academic institutions (Q9).
Table 8.
A comparison of existing automated assessment platforms regarding their features and characteristics, including the use of AI in the platform (if any), the academic subject (ALL if subject-independent), misconduct verification checks, real-time feedback, whether unique questions are generated for each student, adaptive testing, and time-limited assessment functionality.