Cross-lingual entity alignment by fusing attribute embeddings and relation attention

Abstract: The current mainstream approach to entity alignment in knowledge graphs is to learn embedding representations of the graphs with graph neural networks and to align entities by measuring the similarity between their embeddings. Many entity alignment methods consider only the structural and relational information of the knowledge graphs, while attribute information is often ignored. To address this problem, this paper proposes an entity alignment method that fuses attribute embeddings: the Relation-aware Dual-graph Lite convolutional network fusing Attribute information (RDGLite-A). The method first extracts the relation information of the knowledge graph through the attention mechanism of a relation-aware dual-graph convolutional network; it then obtains attribute information with a graph convolutional network equipped with highway gates; finally, it fuses the two embeddings to achieve higher-accuracy entity alignment. Experimental results on three cross-lingual datasets show that the method strengthens entity representations by fusing attribute information: its Hits@1 scores improve over the original model by 6.42%, 4.59%, and 1.98% on the three datasets, respectively, and its alignment performance is clearly better than that of current mainstream entity alignment methods.
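The pipeline the abstract describes — attribute embeddings passed through highway-gated layers, relation and attribute embeddings fused, and alignment scored by nearest-neighbor retrieval with Hits@1 — can be sketched as below. This is a minimal illustration under assumptions, not the paper's RDGLite-A implementation: fusion by concatenation, cosine scoring, and all function names are choices made here for clarity.

```python
import numpy as np

def highway_layer(h, h_new, W, b):
    # Highway gate: a sigmoid gate mixes the layer output h_new with the
    # input h, letting attribute features pass through unchanged when useful.
    g = 1.0 / (1.0 + np.exp(-(h @ W + b)))   # gate in (0, 1)
    return g * h_new + (1.0 - g) * h

def fuse_embeddings(rel_emb, attr_emb):
    # Hypothetical fusion: L2-normalize each view, then concatenate so both
    # relation and attribute information contribute to the similarity score.
    rel = rel_emb / np.linalg.norm(rel_emb, axis=1, keepdims=True)
    attr = attr_emb / np.linalg.norm(attr_emb, axis=1, keepdims=True)
    return np.concatenate([rel, attr], axis=1)

def hits_at_k(src, tgt, k=1):
    # src[i] is assumed to align with tgt[i]; rank all candidates in the
    # target KG by similarity and check whether the true match is in the top k.
    sim = src @ tgt.T                        # pairwise similarity matrix
    ranks = np.argsort(-sim, axis=1)         # candidates, best first
    gold = np.arange(len(src))[:, None]
    return (ranks[:, :k] == gold).any(axis=1).mean()
```

With identical embeddings on both sides, `hits_at_k` returns 1.0; on real cross-lingual KG pairs it is the metric the abstract reports (Hits@1).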

     
