Table 1.

Representative patent examples from the UAV technology dataset.

Table 2.

Dataset characteristics and technological category distributions.

Fig 1.

Contrastive pre-training architecture for multi-label patent representations.

The system processes patent abstracts through a shared encoder to generate embeddings optimized with multi-label contrastive objectives. The similarity computation accounts for both instance-level relationships and label co-occurrence patterns, enabling the model to learn representations that capture technological relatedness and domain-specific semantic structure.
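
For concreteness, the sketch below shows one way such a multi-label contrastive objective can be written in PyTorch. The Jaccard-based positive weighting, the temperature value, and the name multilabel_contrastive_loss are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of a multi-label supervised contrastive loss: pairs are weighted
# as soft positives by the Jaccard overlap of their label sets (assumption).
import torch
import torch.nn.functional as F

def multilabel_contrastive_loss(embeddings, labels, temperature=0.07):
    """embeddings: (B, D) encoder outputs; labels: (B, L) binary multi-hot."""
    z = F.normalize(embeddings, dim=1)
    sim = (z @ z.T) / temperature                    # (B, B) scaled cosine sims
    y = labels.float()
    inter = y @ y.T                                  # |Y_i ∩ Y_j| for each pair
    union = y.sum(1, keepdim=True) + y.sum(1) - inter
    weights = inter / union.clamp(min=1)             # Jaccard overlap in [0, 1]
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    weights = weights.masked_fill(eye, 0.0)          # exclude self-pairs
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(eye, float("-inf")), dim=1, keepdim=True)
    # Average the log-probabilities of soft positives for each anchor.
    loss = -(weights * log_prob).sum(1) / weights.sum(1).clamp(min=1e-8)
    return loss.mean()
```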

Fig 2.

Retrieval-augmented demonstration selection.

The system leverages contrastive embeddings to identify relevant patent demonstrations through multi-faceted similarity scoring. The retrieval process considers semantic similarity, technical domain alignment, and diversity constraints to select informative examples that guide multi-label classification decisions. Retrieved demonstrations are ordered to balance relevance and diversity, providing comprehensive coverage of the label space.
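
A minimal sketch of relevance-plus-diversity selection, here instantiated as maximal marginal relevance (MMR) over the contrastive embeddings. The trade-off parameter lam and the name select_demonstrations are hypothetical; the paper's multi-faceted scoring may combine additional signals such as domain alignment.

```python
# MMR-style demonstration selection over unit-normalized embeddings.
import numpy as np

def select_demonstrations(query_emb, pool_embs, k=5, lam=0.7):
    """query_emb: (D,) query embedding; pool_embs: (N, D) candidates.
    Returns k indices ordered by selection, balancing relevance and diversity."""
    relevance = pool_embs @ query_emb                # cosine sim to the query
    selected, remaining = [], list(range(len(pool_embs)))
    while remaining and len(selected) < k:
        if selected:
            # Redundancy: max similarity to any already-selected demonstration.
            redundancy = (pool_embs[remaining] @ pool_embs[selected].T).max(axis=1)
        else:
            redundancy = np.zeros(len(remaining))
        mmr = lam * relevance[remaining] - (1 - lam) * redundancy
        best = remaining[int(np.argmax(mmr))]
        selected.append(best)
        remaining.remove(best)
    return selected
```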

Fig 3.

The system combines retrieved demonstrations with embedding-guided attention to perform multi-label patent classification.

The prediction module performs decomposed inference for each category while modeling inter-label dependencies, applies adaptive thresholding based on uncertainty, and falls back to label prototypes for sparse categories. The integration of contrastive embeddings, demonstration patterns, and language-model reasoning enables robust classification with minimal labeled examples.
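
The sketch below illustrates one plausible form of this decision step: per-label scores are thresholded independently, and categories with too few support examples fall back to cosine similarity against class prototypes. The threshold rule and the parameters min_support and proto_margin are hypothetical.

```python
# Decomposed per-label prediction with a prototype fallback for sparse labels.
import numpy as np

def predict_labels(scores, thresholds, query_emb, prototypes, support_counts,
                   min_support=2, proto_margin=0.8):
    """scores: (L,) per-label confidences from the language model.
    thresholds: (L,) per-label thresholds (e.g., calibrated on dev episodes).
    prototypes: (L, D) unit-normalized mean embeddings of support examples.
    support_counts: (L,) number of labeled support examples per category."""
    preds = scores >= thresholds                     # independent per-label decision
    proto_sim = prototypes @ query_emb               # cosine sim to each prototype
    for label in range(len(scores)):
        # Sparse categories: trust the embedding prototype instead of the score.
        if support_counts[label] < min_support:
            preds[label] = proto_sim[label] >= proto_margin
    return preds
```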

Table 3.

Overall performance comparison under 5-shot setting.

Fig 4.

Overall performance comparison across evaluation metrics.

The proposed framework consistently outperforms baseline methods across Macro-F1, Micro-F1, LRAP, and Coverage Error metrics. Error bars represent standard deviations across 50 experimental episodes. Statistical significance indicators (***p < 0.001, **p < 0.01, *p < 0.05) show comparisons against our framework.
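
For reference, all four reported metrics are available in scikit-learn; the toy arrays below are purely illustrative, not results from the paper.

```python
# Computing Macro-F1, Micro-F1, LRAP, and Coverage Error on toy data.
import numpy as np
from sklearn.metrics import (f1_score, label_ranking_average_precision_score,
                             coverage_error)

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])            # multi-hot labels
y_score = np.array([[0.9, 0.2, 0.7], [0.1, 0.8, 0.3], [0.6, 0.7, 0.2]])
y_pred = (y_score >= 0.5).astype(int)                            # thresholded preds

print("Macro-F1:", f1_score(y_true, y_pred, average="macro"))
print("Micro-F1:", f1_score(y_true, y_pred, average="micro"))
print("LRAP:", label_ranking_average_precision_score(y_true, y_score))
print("Coverage Error:", coverage_error(y_true, y_score))
```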

Fig 5.

Few-shot learning curves showing performance vs. number of shots.

Our framework demonstrates superior few-shot learning across all shot settings. Left: Macro-F1, which weights all categories equally and therefore emphasizes rare-category detection. Right: Micro-F1, which aggregates over all instances and reflects overall classification accuracy. Error bars represent standard deviations across 50 episodes.

Table 4.

Computational efficiency comparison.

Table 5.

Ablation study: component contribution analysis.
