Fig 1.
Example of distortions and light inconsistencies of a logo (b) rendered on fabric.
(a) Aligned logos with added warp, shear, and noise. (b) Original logo.
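The warp, shear, and noise distortions illustrated in Fig. 1(a) can be sketched as a simple image augmentation. This is a hypothetical numpy-only sketch (the function name `augment_logo` and the row-shift shear model are our assumptions, not the paper's exact warp model):

```python
import numpy as np

def augment_logo(img: np.ndarray, shear: float = 0.1, noise_std: float = 5.0,
                 seed: int = 0) -> np.ndarray:
    """Apply a horizontal shear and additive Gaussian noise to a grayscale logo.

    Illustrative sketch only; the paper's exact distortion model is not
    specified here.
    """
    rng = np.random.default_rng(seed)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        # Shear: shift each row proportionally to its vertical position.
        shift = int(round(shear * y))
        out[y] = np.roll(img[y], shift)
    # Additive Gaussian noise, clipped back to the valid 8-bit range.
    noisy = out.astype(np.float64) + rng.normal(0.0, noise_std, size=out.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

logo = np.full((16, 16), 128, dtype=np.uint8)
aug = augment_logo(logo)
```

A real pipeline would typically add a perspective warp and photometric changes on top of this.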
Table 1.
High-level overview of the articles reviewed, categorized by their methodology, data source, general approach, and the public availability of their implementation.
Fig 2.
Taxonomy overview showing categories of FRPD methods.
Lower boxes represent examples of the category.
Table 2.
Publicly available datasets found in the literature.
Fig 3.
A schematic of the one-time calibration phase, where operator input is used to generate an optimized super-template and configuration.
Fig 4.
The real-time inference phase, where the saved configuration is used to quickly detect all patterns.
Fig 5.
The complete pipeline for FRPD.
The super-resolution, matching, filtering, and alignment steps can be iterated to refine the detection and generate a high-quality super-template.
Fig 6.
The manual template selection process and the resulting cropped template.
(a) Template selection process via GUI. (b) The selected initial template.
Fig 7.
The calibration process, which is repeated for different configurations of hyperparameters.
Fig 8.
The streamlined inference process uses the calibrated configuration to detect fine candidates.
Fig 9.
Cropped samples from our synthetic dataset.
Each image consists of logos overlaid onto a fabric-like texture, with various types of synthetic noise and augmentations applied.
Table 3.
Test metrics on the synthetic dataset (average of 5 runs).
Std. dev. is reported in brackets. ↓ means lower is better, while ↑ means higher is better.
Fig 10.
Comparison of RMSE (alignment) error over three tested methods (median = red line, mean = red circles).
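The RMSE (alignment) error compared in Fig. 10 can be illustrated as a root-mean-square distance between predicted and ground-truth pattern coordinates. This is an assumed definition for illustration (the function name `alignment_rmse` and the corner-point formulation are ours; the paper's exact measure may differ):

```python
import numpy as np

def alignment_rmse(pred: np.ndarray, gt: np.ndarray) -> float:
    """RMSE between predicted and ground-truth point sets, shape (N, 2).

    Illustrative definition: per-point Euclidean errors, squared,
    averaged, then square-rooted.
    """
    diffs = pred.astype(np.float64) - gt.astype(np.float64)
    return float(np.sqrt(np.mean(np.sum(diffs ** 2, axis=1))))

gt = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0], [0.0, 10.0]])
pred = gt + np.array([3.0, 4.0])   # every corner off by the same (3, 4) shift
print(alignment_rmse(pred, gt))    # 5.0 (Euclidean norm of the shift)
```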
Table 4.
Test statistic for the hypothesis that our method has a lower precision error, based on measurements from Table 3.
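A one-sided comparison like the one summarized in Table 4 can be sketched with a paired t-statistic over per-run errors. Everything below is hypothetical (the error values, the function name, and the choice of a plain paired t-test are our assumptions; the paper's actual test and measurements are not reproduced):

```python
import numpy as np

def paired_t_statistic(a: np.ndarray, b: np.ndarray) -> float:
    """Paired t-statistic for H1: mean(a) < mean(b).

    A strongly negative value supports the hypothesis that method `a`
    has lower error than method `b` on the same runs.
    """
    d = a.astype(np.float64) - b.astype(np.float64)
    n = d.size
    return float(d.mean() / (d.std(ddof=1) / np.sqrt(n)))

# Hypothetical per-run precision errors over five runs.
ours = np.array([0.82, 0.79, 0.85, 0.80, 0.83])
baseline = np.array([1.10, 1.05, 1.12, 1.08, 1.11])
t = paired_t_statistic(ours, baseline)  # negative: ours is consistently lower
```

With n - 1 degrees of freedom, the statistic would then be compared against the t-distribution's one-sided critical value.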
Fig 11.
Fitness scores using various inference hyperparameters.
Fig 12.
Inference time in seconds across different configurations.
These are defined by the use of tiling (tl), partial affine estimation (pa), and various settings for the DIS optical flow method. Error bars indicate the 95th percentile of the measured times.
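The "partial affine" estimation mentioned in Fig. 12 restricts the transform to rotation, uniform scale, and translation (4 DoF). A closed-form least-squares solver for this restricted model can be sketched as below; this is a plain Umeyama-style stand-in, not the robust estimator the pipeline presumably uses, and the function name is our own:

```python
import numpy as np

def estimate_partial_affine(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares similarity transform (rotation, uniform scale,
    translation) mapping src -> dst, both of shape (N, 2).

    Returns a 2x3 matrix A such that dst ~= src @ A[:, :2].T + A[:, 2].
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    sc, dc = src - mu_s, dst - mu_d
    # Cross-covariance; its SVD yields the optimal rotation (Umeyama's method).
    H = dc.T @ sc
    U, S, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt
    scale = np.trace(np.diag(S) @ D) / (sc ** 2).sum()
    t = mu_d - scale * R @ mu_s
    A = np.zeros((2, 3))
    A[:, :2] = scale * R
    A[:, 2] = t
    return A

# Recover a known 30-degree rotation, 2x scale, and (3, -1) shift.
src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
theta, s, shift = np.pi / 6, 2.0, np.array([3.0, -1.0])
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
dst = s * src @ R.T + shift
A = estimate_partial_affine(src, dst)
```

In practice a RANSAC-wrapped estimator would be preferred, since matched candidates on real fabric contain outliers.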
Fig 13.
Effect of the induction step (in orange) during inference.
Median scores over 5 runs are reported, with 95th percentile error bars, using the default inference configuration.
Fig 14.
Comparison of completeness and fitness using 0, 1, or 2 iterations of super-resolution followed by matching-filtering-registration.
Median scores are reported over 5 runs. (a) Completeness (COMP) values with different iterations. (b) Fitness (FIT) values with different iterations.
Fig 15.
The effect of the various transformations, visualized on a cropped area from the Python synthetic image.
The type of augmentation is indicated below each image.
Fig 16.
Boxplot showing the RMSE distribution over five different calibration configurations (each corresponding to a different input image rescale-factor), grouped by degradation type, with each configuration repeated five times.
None represents the original image with no degradation.
Table 5.
Increase in RMSE (%) on transformed images compared to the baseline.
Lower values indicate higher robustness.
Fig 17.
Comparison of pattern detection results on a real fabric image.
(a) TM++ detection. (b) Detection with our method.
Table 6.
Metrics on the manually annotated dataset of real images (average).
Std. dev. is reported in brackets.
Fig 18.
Example of a manually annotated real fabric image.
Fig 19.
Detection of a floral pattern.
Fig 20.
Detection of a complex floral pattern with several repeated elements (high-contrast for visibility).