Table 1.

A summary of the feature types and the backbones of different cross-view geo-localization networks.

Table 2.

An overview of the spatial (pose-wise) awareness approaches used in cross-view geo-localization.

Table 3.

An overview of the temporal awareness approaches used in cross-view geo-localization.

Table 4.

The r@k metrics for the networks used to construct the ensemble model.
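
For reference, r@k (recall at k) in cross-view retrieval counts a ground-level query as correct when its true satellite image appears among the k nearest gallery embeddings. A minimal sketch, assuming precomputed embedding matrices; the function and variable names are illustrative and not taken from the paper:

```python
import numpy as np

def recall_at_k(ground_emb, sat_emb, k):
    """r@k for retrieval: ground query i counts as a hit if satellite
    image i ranks within the top-k most similar satellite embeddings."""
    # cosine similarity via L2-normalised dot products
    g = ground_emb / np.linalg.norm(ground_emb, axis=1, keepdims=True)
    s = sat_emb / np.linalg.norm(sat_emb, axis=1, keepdims=True)
    sim = g @ s.T                               # (n_queries, n_gallery)
    # rank of the true match (gallery index i) for each query i
    order = np.argsort(-sim, axis=1)
    ranks = np.argmax(order == np.arange(len(g))[:, None], axis=1)
    return float(np.mean(ranks < k))
```

With matched one-hot embeddings every query retrieves its pair at rank 1, so r@1 is 1.0; shifting the gallery breaks the top-1 matches while r@n stays 1.0.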

Fig 1.

Different ensemble aggregation methods.
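
As a rough illustration of the two aggregation families compared here: soft-voting averages the models' similarity scores before ranking, while hard-voting lets each model cast a vote for its top candidate. A minimal sketch under assumed per-model similarity matrices; names are hypothetical, not from the paper:

```python
import numpy as np

def soft_vote(sim_matrices):
    """Average per-model similarity scores, then rank gallery candidates."""
    avg = np.mean(sim_matrices, axis=0)         # (n_queries, n_gallery)
    return np.argsort(-avg, axis=1)             # ranked candidate indices

def hard_vote(sim_matrices):
    """Each model votes for its top-1 candidate per query; the candidate
    with the most votes wins (ties broken here by lowest index)."""
    sims = np.asarray(sim_matrices)             # (n_models, n_queries, n_gallery)
    top1 = np.argmax(sims, axis=2)              # each model's top-1 per query
    winners = []
    for q in range(top1.shape[1]):
        votes = np.bincount(top1[:, q], minlength=sims.shape[2])
        winners.append(int(np.argmax(votes)))
    return winners
```

The hard-voting variants evaluated in the paper differ in how the final selection is resolved, e.g. deferring to the most accurate model's prediction or choosing at random; the sketch above uses the simplest lowest-index tie-break instead.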

Table 5.

A statistical summary of the distance covered in selected videos.

Fig 2.

Three examples of ground images from our dataset.

Fig 3.

Examples of different lighting conditions in the BDD100K dataset.

Fig 4.

An example of a blurry aerial image.

The blur is sometimes deliberate, applied by the satellite imagery provider for privacy reasons.

Fig 5.

A simplified version of the data processing pipeline.

Fig 6.

CVUSA combinations for the soft-voting strategies.

Fig 7.

CVACT combinations for the soft-voting strategies.

Fig 8.

CVUSA combinations for hard-voting with the most accurate model prediction strategy.

Fig 9.

CVACT combinations for hard-voting with the most accurate model prediction strategy.

Fig 10.

An example showing the ensemble model's r@(1–5) results compared to the individual models on the CVUSA dataset.

The true satellite image has a red border.

Fig 11.

An example showing the ensemble model's r@(1–5) results compared to the individual models on the CVACT dataset.

The true satellite image has a red border.

Fig 12.

CVUSA combinations for hard-voting with the random selection strategy.

Fig 13.

CVACT combinations for hard-voting with the random selection strategy.

Fig 14.

Comparison between aggregation methods for the best-performing combinations.

A: CVUSA and B: CVACT.

Fig 15.

The effect of the size of the validation dataset on the r@k metric.

Same model (EgoTR) with the same dataset (BDD-trajectories). The accuracy decreases as the size of the validation dataset increases.

Table 6.

r@k metrics of EgoTR fine-tuned over the reshaped BDD-trajectories dataset.

Fig 16.

The effect of the number of look-back steps on accuracy.

Fig 17.

The effect of a weak prior on naive history.
