Citation: LIU Peng-fei, ZHANG Wei-feng, HE Ke-jing. Research on graph attention network ensemble optimized by differential evolution algorithm[J]. Journal of Yunnan University: Natural Sciences Edition, 2022, 44(1): 41-48. DOI: 10.7540/j.ynu.P00152


Research on graph attention network ensemble optimized by differential evolution algorithm


Abstract: To further improve the performance and robustness of graph classification algorithms, a graph attention network ensemble optimized by the differential evolution algorithm is proposed. First, the original samples are partitioned so that different base learners focus on different regions of the data. Second, exploiting the strong search ability of the differential evolution algorithm, the weight vector of the base learners is optimized with the classification error rate of the classifier ensemble as the objective function. Finally, the outputs of the base learners are combined according to the weight vector to give the overall output of the ensemble. Experiments on the citation dataset Cora show that, compared with the basic graph attention network model, the proposed ensemble algorithm improves classification performance and robustness to some extent. With fixed hyperparameters, its accuracy is 0.001 ~ 0.011 higher than the average accuracy of the internal base learners, and it matches or leads the majority-voting classifier ensemble by 0 ~ 0.005; with random hyperparameters, its accuracy is 0.053 ~ 0.173 higher than the average accuracy of the internal base learners, and it leads the majority-voting classifier ensemble by 0.003 ~ 0.006. In addition, the analysis of ensemble training time under parameter perturbation and data perturbation also yields meaningful conclusions.
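As a rough illustration of the optimization step described in the abstract, the following minimal sketch (not the authors' implementation) uses differential evolution to search for base-learner weights that minimize the ensemble's classification error on validation data; the weighted sum of the base learners' class probabilities then gives the ensemble prediction. The array shapes, the random stand-in data, and the use of SciPy's differential_evolution are assumptions made for illustration; in the paper, the base outputs would come from graph attention networks trained on different partitions of the samples.

import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Hypothetical stand-in for the class-probability outputs of M base GAT
# learners on a validation set: probs[m, i, c] is the probability that
# learner m assigns class c to sample i (Cora has 7 classes).
M, N, C = 5, 200, 7
probs = rng.dirichlet(np.ones(C), size=(M, N))
val_labels = rng.integers(0, C, size=N)  # hypothetical validation labels

def ensemble_error(w):
    # Objective for DE: classification error rate of the weighted ensemble.
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)  # normalize weights to sum to 1
    combined = np.tensordot(w, probs, axes=1)  # (N, C) weighted probabilities
    return np.mean(combined.argmax(axis=1) != val_labels)

result = differential_evolution(ensemble_error, bounds=[(0.0, 1.0)] * M, seed=0)
weights = np.abs(result.x) / np.abs(result.x).sum()
print("optimized weights:", weights)
print("ensemble error on validation data:", result.fun)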
