
Fig 1.

The pre-processing steps for the news-articles dataset are illustrated.


Table 1.

A summary of disaster-related entity class names is listed, including a brief description and the source ontology for each class.


Fig 2.

A bar plot showing the counts (in hundreds) of the 14 disaster-related entity types in the news-articles dataset.


Fig 3.

A pie chart illustrating the percentage distribution of the 14 disaster-related entity types in the news-articles dataset.


Table 2.

An example of word tokenization and POS tagging for a training sentence.


Fig 4.

The word-embedding model development process is explained.

|V| denotes the cardinality of the vocabulary (i.e., the number of distinct words in the corpus), N denotes the dimension of the embedding vectors (i.e., the hidden layer has N units), and C represents the size of the context window.
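The dimensions in this caption can be made concrete with a small numpy sketch of a CBOW-style word2vec forward pass. The specific values of |V|, N, and C below are toy numbers for illustration, not the ones used in the paper:

```python
import numpy as np

# Toy dimensions, illustrative only; the paper's actual |V|, N, C differ.
V, N, C = 10, 4, 2  # vocabulary size, embedding dimension, context window

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, N))   # input embedding matrix, |V| x N
W_out = rng.normal(scale=0.1, size=(N, V))  # output projection, N x |V|

# CBOW forward pass: average the embeddings of the 2*C context words,
# then project back to a |V|-dimensional score vector over the vocabulary.
context_ids = [1, 3, 5, 7]          # indices of the 2*C surrounding words
h = W_in[context_ids].mean(axis=0)  # hidden layer, shape (N,)
scores = h @ W_out                  # shape (|V|,)
probs = np.exp(scores) / np.exp(scores).sum()  # softmax over the vocabulary

print(h.shape, probs.shape)  # (4,) (10,)
```

After training, the rows of `W_in` serve as the N-dimensional word vectors.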


Fig 5.

Histograms of the word-embedding vector distributions are illustrated.

(A) Contextual embedding. (B) Word2vec embedding.


Fig 6.

An illustration of the character-embedding model.


Fig 7.

An illustration of the proposed BiLSTM-ATTN-CRF model architecture for the disaster-related named entity recognition task.
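The attention step between the BiLSTM encoder and the CRF layer can be sketched in plain numpy. This is a schematic assumption about the scoring function and dimensions, not the paper's exact formulation:

```python
import numpy as np

# Schematic sketch of the attention step in a BiLSTM-ATTN-CRF pipeline:
# given per-token BiLSTM outputs, compute softmax attention weights and a
# context vector. The scoring form and sizes are illustrative assumptions.
rng = np.random.default_rng(1)
T, H = 5, 8                      # sequence length, BiLSTM output size
h = rng.normal(size=(T, H))      # one BiLSTM output vector per token

w = rng.normal(size=(H,))        # learned attention scoring vector (assumed)
scores = np.tanh(h) @ w          # one scalar score per token, shape (T,)
alpha = np.exp(scores) / np.exp(scores).sum()   # softmax attention weights
context = alpha @ h              # weighted sum of token vectors, shape (H,)

# Each token representation is concatenated with the shared context vector
# before the CRF layer scores candidate tag sequences.
augmented = np.concatenate([h, np.tile(context, (T, 1))], axis=1)
print(augmented.shape)  # (5, 16)
```

The CRF layer would then decode the best tag sequence over these augmented representations.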


Table 3.

Number of sentences, words, and characters in the training and test data.


Table 4.

Distribution of major disaster-related named entities in training and test data.


Table 5.

An example of tokenization and labeling for a sample sentence.
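The tokenization-and-labeling scheme in Table 5 can be illustrated with a short sketch, assuming a standard BIO tagging convention; the sentence and entity-type names below (DISASTER, LOC, DATE) are hypothetical examples, not taken from the paper's tag set:

```python
# Minimal sketch of BIO-style token labeling for NER. B- marks the first
# token of an entity span, I- marks continuation tokens, and O marks
# tokens outside any entity. Tags and sentence are illustrative only.
sentence = "Typhoon Haiyan struck Tacloban in November 2013"
tokens = sentence.split()

labels = ["B-DISASTER", "I-DISASTER", "O", "B-LOC", "O", "B-DATE", "I-DATE"]

for tok, lab in zip(tokens, labels):
    print(f"{tok}\t{lab}")
```

In a real pipeline each (token, label) pair becomes one training instance row, with sentences separated by blank lines in CoNLL-style files.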


Table 6.

The list of hyper-parameters with their corresponding ranges and optimal values.


Table 7.

Results on test data for different model configurations.


Table 8.

BiLSTM-ATTN-CRF model performances using different word-level embeddings.


Table 9.

BiLSTM-ATTN-CRF performances with various feature combinations. In column 1, “All” indicates the combination of baseline embeddings, POS tagging, and casing features, whereas “Base embeddings” indicates the word- and character-embedding features combined.


Table 10.

Summary of performance in recent disaster-specific NER studies and the current study.


Table 11.

NER performance comparative analysis for alternative model configurations.


Table 12.

NER performance comparative analysis for an alternative dataset.
