RNAtranslator: Modeling protein-conditional RNA design as sequence-to-sequence natural language translation
Fig 8
Analysis of RNAtranslator attention by visualizing cross-attention maps
This figure presents the cross-attention analysis of the RNAtranslator decoder. The model is run on up to 1000 RNA–protein pairs per RNA-binding protein (RBP), and attention maps are collected from all layers and heads. For each protein position, the maximum attention weight it receives from any RNA position is recorded. To assess the model’s focus on known RNA-binding domains, an attention ratio is defined as the maximum attention score within a known domain divided by the maximum score outside the domain. (A) Four representative RBPs exhibit attention ratios greater than one (dashed line), indicating that the model assigns higher attention to known binding regions. (B) Averaged across all RBPs, the attention ratio peaks in the middle decoder layers (L1–L4), suggesting these layers contribute most to identifying binding sites.
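The attention-ratio metric described above can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `attention_ratio`, the toy attention map, and the domain span are all assumptions; the map shape `(rna_len, protein_len)` assumes decoder RNA positions attending to protein positions, as the caption describes.

```python
import numpy as np

def attention_ratio(attn, domain_mask):
    """Ratio of peak in-domain to peak out-of-domain attention.

    attn:        (rna_len, protein_len) cross-attention map; row i
                 holds the attention RNA position i pays to each
                 protein position.
    domain_mask: boolean (protein_len,), True inside a known
                 RNA-binding domain.
    """
    # For each protein position, the maximum attention weight it
    # receives from any RNA position (max over the RNA axis).
    per_protein_max = attn.max(axis=0)
    # A ratio > 1 means the strongest attention falls inside the
    # known binding domain.
    return per_protein_max[domain_mask].max() / per_protein_max[~domain_mask].max()

# Toy example (synthetic data, for illustration only):
rng = np.random.default_rng(0)
attn = rng.uniform(0.0, 0.5, size=(50, 20))   # 50 RNA x 20 protein positions
attn /= attn.sum(axis=1, keepdims=True)        # normalize rows like softmax output
attn[:, 5:10] += 0.4                           # hypothetical binding domain at 5-9
mask = np.zeros(20, dtype=bool)
mask[5:10] = True
print(attention_ratio(attn, mask))             # > 1: model focuses on the domain
```

In the figure, this ratio is computed per layer and head and then averaged across RBPs to produce the layer-wise profile in panel (B).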