Fig 1.

Workflow of the offensive language detection methodology for Persian.


Table 1.

Shared tasks on abusive language identification across different types and languages.


Fig 2.

Paper structure diagram.


Table 2.

Distribution of annotated data across the three levels of the annotation schema.

A set of 6,000 out of the 520,000 sampled items is randomly selected for the annotation process.


Fig 3.

Tweet samples (original and translated) from the annotated data with their categories for each level of the annotation schema.


Table 3.

Baseline ML models.


Table 4.

Baseline DL models.


Table 5.

Description of the transformer-based neural network models used for offensive language identification in Persian.


Fig 4.

Diagram of the stacking K-fold cross-validation.
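The out-of-fold prediction scheme that stacking with K-fold cross-validation relies on can be sketched as follows. This is a minimal illustration, not the paper's implementation: `fit_mean` and `predict_mean` are invented toy stand-ins for a real base learner, and the fold-splitting logic assumes the standard shuffled K-fold procedure.

```python
import random

def kfold_indices(n, k, seed=0):
    """Shuffle sample indices and split them into k disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def oof_predictions(X, y, fit, predict, k=5):
    """Out-of-fold predictions for stacking: every sample is scored by a
    base model that never saw it during training. The resulting vector
    forms one meta-feature column for the stacked (meta-level) classifier."""
    oof = [None] * len(X)
    for held_out in kfold_indices(len(X), k):
        held = set(held_out)
        train = [i for i in range(len(X)) if i not in held]
        model = fit([X[i] for i in train], [y[i] for i in train])
        for i in held_out:
            oof[i] = predict(model, X[i])
    return oof

# Toy base learner (illustrative only): threshold on the training mean.
def fit_mean(X, y):
    return sum(X) / len(X)

def predict_mean(threshold, x):
    return 1.0 if x > threshold else 0.0
```

Repeating this for each base classifier yields the matrix of out-of-fold predictions on which the stacking ensemble is trained, so the meta-classifier never learns from scores produced on training data.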


Fig 5.

Preprocessing steps of the dataset.


Table 6.

Results for offensive language identification (first level).

Bold and underlined numbers indicate the best and second-best scores, respectively, within each category: classical ML, DL, and transformer-based neural networks.


Table 7.

Results for targeted offensive language identification (second level).

Bold and underlined numbers indicate the best and second-best scores, respectively, within each category: classical ML, DL, and transformer-based neural networks.


Table 8.

Results for target type of offensive language identification (third level).

Bold and underlined numbers indicate the best and second-best scores, respectively, within each category: classical ML, DL, and transformer-based neural networks.


Fig 6.

Pairwise Pearson correlation coefficients between the predicted probabilities of the individual classifiers on the out-of-fold test set.

The first level (a) shows the correlation between the output predictions of classifiers trained to separate offensive from non-offensive content. The second level (b) shows the correlation for classifiers trained to separate targeted from untargeted samples. The third level (c) shows the correlation for classifiers trained to decide whether targeted offensive content is aimed at an individual or a group.
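The quantity plotted in Fig 6 can be reproduced with a small Pearson helper over each classifier's out-of-fold probability vector. This is a sketch under assumed inputs: the classifier names and probability values passed to `correlation_matrix` are invented for illustration.

```python
def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

def correlation_matrix(oof_probs):
    """Pairwise Pearson correlations between the out-of-fold probability
    vectors of several classifiers, keyed by classifier name."""
    names = list(oof_probs)
    return {a: {b: pearson(oof_probs[a], oof_probs[b]) for b in names}
            for a in names}
```

Low pairwise correlation between two base classifiers suggests they make complementary errors, which is precisely when combining them in a stacking ensemble is most useful.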


Fig 7.

Offensive language identification performance across all models at the three annotation levels.

The first level (a), second level (b), and third level (c) show the performance of the selected base-level classifiers alongside the stacking ensemble classifier in identifying offensive vs. non-offensive content, targeted vs. untargeted offensive content, and whether offensive language targets an individual or a group, respectively.
