Abstract:
Existing prompt-based few-shot relation extraction methods suffer from prototype bias when dealing with outlier samples. To address this issue, we propose a universal prompt-enhanced prototype calibration framework for few-shot relation extraction. First, we design learnable dynamic prompt templates that synergistically integrate dual-modality prior knowledge from relation descriptions and virtual entity types, enhancing the reasoning and learning capabilities of the pre-trained model. Second, we introduce a contrastive learning paradigm to refine prototype representations, achieving cross-instance semantic alignment by optimizing the prompt-induced feature space. Finally, an instance-level multi-gate relation attention mechanism fuses relation description information with prototype representations, allowing the model to perceive class distributions from both local instances and global information, thereby alleviating the bias of prototype construction in few-shot scenarios. The proposed model-agnostic three-stage framework can be easily transferred to different pre-trained models. Experimental results on the FewRel 1.0 and 2.0 datasets verify that the proposed method achieves state-of-the-art performance compared with baseline models.