
Table 1.

Samples of various question types.


Table 2.

Statistical information of the FQSD dataset.


Table 3.

Descriptive metrics for the FQSD dataset.


Table 4.

Pearson correlation among annotators.


Table 5.

Interpretation ranges for Fleiss’s Kappa.


Fig 1.

Top 30 TF-IDF scores of nouns and noun phrases in FQSD.


Fig 2.

Top 20 TF-IDF scores of adverbs/adjectives in FQSD categorized by subjectivity comparison-form classes: a) CO, b) CS, c) SO, d) SS.


Fig 3.

Histograms of TF-IDF scores of adverbs/adjectives in FQSD categorized by subjectivity comparison-form classes: a) CO, b) CS, c) SO, d) SS.


Table 6.

Statistical information of the Yu et al. [7] dataset.


Table 7.

Statistical information of the ConvEx-DS dataset [9].


Table 8.

Statistical information of the SubjQA dataset [8].


Fig 4.

Distribution of total question count and multi-sentence question count across the FQSD, ConvEx-DS, Yu et al., 2012, and SubjQA datasets, showcasing the size and structural analysis of each dataset.


Fig 5.

Distribution of the total word count and unique word count across the FQSD, ConvEx-DS, Yu et al., 2012, and SubjQA datasets, showcasing the lexical richness of each dataset.


Fig 6.

Distribution of the average words per sentence, average sentence length, average word length, and average syllables per word across the FQSD, ConvEx-DS, Yu et al., 2012, and SubjQA datasets, showcasing the linguistic complexity of each dataset.


Fig 7.

Visualizing the average parse tree depth across the FQSD, ConvEx-DS, Yu et al., 2012, and SubjQA datasets, showcasing the syntactic complexity of each dataset.


Fig 8.

Visualizing the Mean Dependency Distance (MDD) across the FQSD, ConvEx-DS, Yu et al., 2012, and SubjQA datasets, showcasing the dependency analysis of each dataset.


Fig 9.

Visualizing the Root Type-Token Ratio (RTTR) and the Corrected Type-Token Ratio (CTTR) across the FQSD, ConvEx-DS, Yu et al., 2012, and SubjQA datasets, showcasing the lexical diversity of each dataset.


Fig 10.

Visualizing the sparsity degree and total question count across the FQSD, ConvEx-DS, Yu et al., 2012, and SubjQA datasets, showcasing the data sparsity of each dataset.


Table 9.

RoBERTa’s five-fold cross-validation evaluation on FQSD.


Fig 11.

LIME visualizations of word influence in the model’s predictions for Instance 1 (Fig 11a) and Instance 2 (Fig 11b) on the Yu et al. [7] dataset.


Table 10.

Model performance across different dataset sizes (averaged over 5 runs using stratified 5-fold cross-validation).


Table 11.

Analysis of the proposed subjectivity classification model over five separate runs on the SubjQA dataset [8].


Table 12.

Analysis of the proposed subjectivity-comparison form classification model over five separate runs on the ConvEx-DS dataset [9].


Table 13.

Evaluation of the proposed model (trained on FQSD and tested on the Yu et al. [7] dataset) vs. the Yu et al. [7] model.


Table 14.

Comparative analysis of transformer models’ performance (LR = 1e-5) on the FQSC task across the FQSD dataset over five independent runs.


Table 15.

Comparative analysis of transformer models’ performance (LR = 1e-5) on the subjectivity-comparison form classification task across the ConvEx-DS dataset [9] over five independent runs.


Table 16.

Comparative analysis of transformer models’ performance (LR = 3e-5) on the subjectivity classification task across the SubjQA dataset [8] over five independent runs.


Table 17.

Comparative analysis of transformer models’ performance on fine-grained subjectivity tasks across multiple datasets over five independent runs.
