
Fig 1.

A workflow diagram provides an overview of the project, including participant invitation, data collection, data processing, statistical analysis, evaluation, and reporting.


Table 1.

The number and proportions of questionnaire responses for each country.


Table 2.

The number and proportions of questionnaire responses for each academic job role.


Fig 2.

The proposed framework for auto-assessing students’ work submissions comprises GAI, a rule-based algorithm, and human intervention.


Table 3.

The prevalence of multiple-choice question responses from the collected survey (117 participants).


Fig 3.

Pie charts showing the diversity of participants, including a) country of work and b) job role type.


Fig 4.

Limitations identified in terms of the awareness of AI in education.

a) LLM familiarity, b) student awareness, c) institutional policy.


Fig 5.

Pie charts demonstrating the impacts of AI in education.

a) AI identified, b) effect on originality, c) effect on quality.


Fig 6.

Pie charts describing academics’ opinions regarding AI in education.

a) identification effectiveness, b) should AI be allowed, c) would autonomous assessment be beneficial?


Table 4.

Interdependence between the recommendation of AI-based auto-assessment tools, acceptance of students' use of AI-based tools, and other related factors.


Fig 7.

Responses to the question "What strategies, if any, does your institution employ to mitigate the impact of AI tools on assessment integrity? (Check all that apply)".


Table 5.

Topics identified in open-ended responses using LDA, with country-specific prevalence and effect sizes.


Table 6.

Functional validation outcomes of the prototype auto-assessment framework.


Table 7.

Evaluation results of the prototype application based on participant feedback (n = 20).


Fig 8.

Visualising responses to prototype evaluation questions. a) effectiveness of AI-generated MCQs (Q5); b) clarity and understandability of AI-generated MCQs (Q7); c) accuracy of AI-generated feedback (Q8); d) feasibility of the application for use in academic institutions (Q9).


Table 8.

A comparison of existing automated assessment platforms regarding their features and characteristics, including the use of AI in the platform (if any), the academic subject ("ALL" if subject-independent), misconduct verification checks, real-time feedback, whether unique questions are used for each student, adaptive testing, and time-limited assessment functionality.
