Table 1.
The datasets and classification schemes used in the literature to develop question classifiers in various knowledge domains.
Table 2.
The datasets and the machine learning models used in the literature to develop question classifiers.
Table 3.
The performance of the winning machine learning models in different datasets in the literature.
Table 4.
The first-level topics in the Oracle SQL Expert exam and the number of questions in each topic in the dataset.
Table 5.
The factors and responses in the experiment.
Table 6.
Summary of the experiment results for the three performance metrics: weighted macro-average AUC (wAUC), weighted macro-average precision (wP), and weighted macro-average F1-score (wF1).
Fig 1.
The distributions of the weighted macro-average AUC values for groups of various combinations of feature representation schemes and machine learning models.
Fig 2.
The mean analysis of the weighted macro-average AUC values for the FRS and MLM factors and their interaction.
Fig 3.
The effect-size analysis of the weighted macro-average AUC values in the four quantiles for the interactions between the FRS and MLM factors.
Fig 4.
The distributions of the weighted macro-average precision values for groups of various combinations of feature representation schemes and machine learning models.
Fig 5.
The mean analysis of the weighted macro-average precision values for the FRS and MLM factors and their interaction.
Fig 6.
The effect-size analysis of the weighted macro-average precision values in the four quantiles for the interactions between the FRS and MLM factors.
Fig 7.
The distributions of the weighted macro-average F1-score values for groups of various combinations of feature representation schemes and machine learning models.
Fig 8.
The mean analysis of the weighted macro-average F1-score values for the FRS and MLM factors and their interaction.
Fig 9.
The effect-size analysis of the weighted macro-average F1-score values in the four quantiles for the interactions between the FRS and MLM factors.