
Table 1. Recall, precision, and F1-score in the binary case.
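For reference, the binary-case metrics listed in Table 1 follow the standard definitions from the confusion-matrix counts. A minimal sketch in Python (the function name and example counts are illustrative, not taken from the paper):

```python
def binary_metrics(tp, fp, fn):
    """Standard binary-classification metrics from confusion counts.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    recall = tp / (tp + fn)           # fraction of actual positives found
    precision = tp / (tp + fp)        # fraction of predicted positives correct
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return recall, precision, f1

# Example with hypothetical counts: 40 TP, 10 FP, 20 FN
r, p, f = binary_metrics(40, 10, 20)
# recall = 40/60, precision = 40/50, F1 = 8/11
```

The F1-score is the harmonic mean of precision and recall, so it is high only when both are high.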

Table 2. Emotions considered in bilingual emotion recognition with a common model set.

Fig 1. Computation of shifted delta cepstral (SDC) coefficients.
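Fig 1 depicts the shifted delta cepstral (SDC) computation. As a hedged sketch of the standard N-d-P-k parameterization: for each frame t, k delta vectors computed over the first N cepstral coefficients at shifts of P frames are stacked into one feature vector. The default 7-1-3-7 below is a common choice in language identification, not necessarily the paper's configuration:

```python
import numpy as np

def sdc(cepstra, N=7, d=1, P=3, k=7):
    """Shifted delta cepstral (SDC) features, N-d-P-k parameterization.

    cepstra: (T, >=N) array of per-frame cepstral coefficients.
    For each frame t, stacks the k delta vectors
        delta_i(t) = c[t + i*P + d] - c[t + i*P - d],  i = 0..k-1,
    over the first N coefficients, giving an (N*k)-dimensional vector.
    Frames whose shifted indices fall outside the signal are dropped.
    """
    c = np.asarray(cepstra)[:, :N]
    T = c.shape[0]
    last = T - ((k - 1) * P + d)   # last frame index with all shifts in range
    out = []
    for t in range(d, last):
        blocks = [c[t + i * P + d] - c[t + i * P - d] for i in range(k)]
        out.append(np.concatenate(blocks))
    return np.array(out)
```

With the 7-1-3-7 setting, each output frame is a 49-dimensional vector that summarizes cepstral dynamics over roughly 21 frames of context.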

Fig 2. Architecture of the proposed convolutional neural network-based classifier.

Table 3. Spoken language identification rates [%] using English and German emotional speech data.

Table 4. Recalls for speech emotion recognition using IEMOCAP and DNN.

Table 5. Recalls for speech emotion recognition using IEMOCAP and CNN.

Table 6. Precision of speech emotion recognition using IEMOCAP and DNN.

Table 7. Precision of speech emotion recognition using IEMOCAP and CNN.

Table 8. F1-scores for speech emotion recognition using IEMOCAP and DNN.

Table 9. F1-scores for speech emotion recognition using IEMOCAP and CNN.

Table 10. Confusion matrix [%] using IEMOCAP and DNN with MFCC/SDC features.

Table 11. Confusion matrix [%] using IEMOCAP and CNN with MFCC/SDC features.

Table 12. Recalls for speech emotion recognition using FAU Aibo and DNN.

Table 13. Recalls for speech emotion recognition using FAU Aibo and CNN.

Table 14. Precision of speech emotion recognition using FAU Aibo and DNN.

Table 15. Precision of speech emotion recognition using FAU Aibo and CNN.

Table 16. F1-scores for speech emotion recognition using FAU Aibo and DNN.

Table 17. F1-scores for speech emotion recognition using FAU Aibo and CNN.

Table 18. Confusion matrix [%] using FAU Aibo and DNN with MFCC/SDC features.

Table 19. Confusion matrix [%] using FAU Aibo and CNN with MFCC/SDC features.

Table 20. Recalls for speech emotion recognition using a common model set and DNN.

Table 21. Recalls for speech emotion recognition using a common model set and CNN.

Table 22. Precision of speech emotion recognition using a common model set and DNN.

Table 23. Precision of speech emotion recognition using a common model set and CNN.

Table 24. F1-scores for speech emotion recognition using a common model set and DNN.

Table 25. F1-scores for speech emotion recognition using a common model set and CNN.

Table 26. Training and test instances for the IEMOCAP corpus.

Table 27. Confusion matrix [%] of the spoken language identification in the first pass.

Fig 3. UARs for multilingual and monolingual emotion recognition for three languages.