A prompt-enhanced fusion algorithm for few-shot relation extraction

  • Abstract: Existing prompt-based few-shot relation extraction methods suffer from prototype bias when faced with outlier samples. To address this problem, we propose a prompt-enhanced fusion framework for few-shot relation extraction, a universal optimization framework for prototype calibration. First, we design learnable dynamic prompt templates that integrate dual-modality prior knowledge from relation semantic descriptions and virtual entity types, strengthening the learning and reasoning abilities of the pre-trained model. Second, we construct a contrastive learning framework that optimizes the prompt representation space of relation instances, improving prototype quality through cross-instance semantic alignment. Finally, an instance-level multi-gate relation attention mechanism fuses relation semantic information with prototype representations, enabling the model to perceive class distributions from both local instances and global information and thereby alleviating prototype-construction bias in few-shot scenarios. The three-stage optimization framework is model-agnostic and transfers easily across different pre-trained models. Comparative experiments on the FewRel 1.0 and FewRel 2.0 datasets show that the proposed framework outperforms all baseline models.
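The cross-instance alignment step can be illustrated with a generic supervised contrastive (InfoNCE-style) objective over prompt representations: instances of the same relation are pulled together, different relations pushed apart. This is a minimal sketch of that general technique, not the paper's exact formulation; the function name, temperature `tau`, and the NumPy setting are illustrative assumptions.

```python
# Sketch of a supervised contrastive loss over prompt representations
# (generic InfoNCE-style form; the paper's exact objective may differ).
import numpy as np

def supcon_loss(embeddings, labels, tau=0.1):
    """embeddings: (B, d) prompt representations; labels: (B,) relation ids.
    Returns the mean supervised contrastive loss over the batch."""
    # L2-normalize so that dot products are cosine similarities.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / tau                      # temperature-scaled similarities
    B = len(labels)
    loss = 0.0
    for i in range(B):
        pos = (labels == labels[i])
        pos[i] = False                       # exclude the self-pair
        if not pos.any():
            continue                         # no positive partner in batch
        # log of the denominator: sum over all other instances j != i
        others = np.concatenate([sim[i, :i], sim[i, i + 1:]])
        log_denom = np.log(np.exp(others).sum())
        # average negative log-likelihood over the positives of i
        loss += -np.mean(sim[i, pos] - log_denom)
    return loss / B
```

With well-clustered embeddings and correct labels this loss is near zero, while mismatched labels drive it up, which is the signal that tightens the prompt representation space before prototypes are built.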
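The prototype-calibration idea, attending over support instances so outliers contribute less, then gating local (instance) evidence against the global relation description, can be sketched as follows. The attention score, the sigmoid gate, and all parameter names here are simplifying assumptions standing in for the learned multi-gate relation attention; this is not the paper's exact mechanism.

```python
# Sketch of attention-weighted, description-gated prototype construction
# for N-way K-shot relation classification (illustrative, not the paper's
# exact multi-gate formulation).
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def calibrated_prototypes(support, rel_desc, gate_w):
    """support:  (N, K, d) support-instance embeddings, N relations, K shots.
    rel_desc: (N, d) relation-description embeddings.
    gate_w:   (d,) gate parameters (assumed; learned in the real model).
    Returns (N, d) calibrated prototypes."""
    # Instance-level attention: weight each shot by its similarity to the
    # relation description, so outlier instances contribute less.
    scores = np.einsum('nkd,nd->nk', support, rel_desc)   # (N, K)
    attn = softmax(scores, axis=1)
    local = np.einsum('nk,nkd->nd', attn, support)        # attended mean
    # Gate blends local instance evidence with the global description prior.
    g = 1.0 / (1.0 + np.exp(-(local * gate_w).sum(-1, keepdims=True)))
    return g * local + (1.0 - g) * rel_desc

def classify(query, prototypes):
    """Nearest-prototype prediction for a single (d,) query embedding."""
    return int(np.argmax(prototypes @ query))
```

Because the prototype is a convex combination of attended support instances and the relation description, a single outlier shot can no longer dominate the class centroid, which is the bias the framework aims to correct.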

     
