• Indexed in
  • Chinese Science Citation Database (CSCD)
  • Chinese Scientific and Technological Paper and Citation Database (CSTPCD)
  • China National Knowledge Infrastructure (CNKI)
  • Chinese Science Abstracts Database (CSAD)
  • JST China
  • SCOPUS
LIU Peng-fei, ZHANG Wei-feng, HE Ke-jing. Research on graph attention network ensemble optimized by differential evolution algorithm[J]. Journal of Yunnan University: Natural Sciences Edition, 2022, 44(1): 41-48. DOI: 10.7540/j.ynu.P00152

Research on graph attention network ensemble optimized by differential evolution algorithm

  • To further improve the performance and robustness of graph classification algorithms, a graph attention network ensemble optimized by a differential evolution algorithm is proposed. First, the original samples are partitioned so that different base learners attend to different regions of the data. Second, exploiting the strong search ability of the differential evolution algorithm, the weight vector of the base learners is optimized with the classification error rate of the classifier ensemble as the objective function. Finally, based on this weight vector, the outputs of the base learners are combined to form the overall output of the classifier ensemble. Experiments on the Cora citation dataset show that, compared with the basic graph attention network model, the proposed ensemble algorithm improves both classification performance and robustness. With fixed hyperparameters, its accuracy is 0.001 ~ 0.011 higher than the average accuracy of its internal base learners, and it matches or leads the majority-voting classifier ensemble by 0 ~ 0.005. With random hyperparameters, its accuracy is 0.053 ~ 0.173 higher than the average accuracy of the internal base learners and leads the majority-voting classifier ensemble by 0.003 ~ 0.006. In addition, meaningful conclusions are drawn from an analysis of ensemble training time under parameter perturbation and data perturbation.
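The weighting scheme the abstract describes can be sketched in a minimal form: a differential evolution loop (DE/rand/1/bin) searches for the weight vector that minimizes the ensemble's classification error rate. The base-learner probability outputs below are simulated stand-ins, not graph attention networks, and all names and hyperparameter values (`F`, `CR`, population size) are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's base learners: each "learner" emits
# class-probability vectors for the same validation samples. In the
# paper these would be graph attention networks trained on different
# regions of the partitioned data.
n_samples, n_classes, n_learners = 200, 3, 4
y_true = rng.integers(0, n_classes, size=n_samples)
probs = rng.random((n_learners, n_samples, n_classes))
for k in range(n_learners):                      # give each learner some skill
    probs[k, np.arange(n_samples), y_true] += 0.4 + 0.1 * k
probs /= probs.sum(axis=2, keepdims=True)

def ensemble_error(w):
    """Objective: classification error rate of the weighted ensemble."""
    w = w / w.sum()                              # normalize to a weight vector
    combined = np.tensordot(w, probs, axes=1)    # (n_samples, n_classes)
    return np.mean(combined.argmax(axis=1) != y_true)

# Minimal DE/rand/1/bin optimizing the weight vector.
pop_size, dim, F, CR, generations = 20, n_learners, 0.5, 0.9, 60
pop = rng.random((pop_size, dim)) + 0.01         # keep weights positive
pop[0] = np.ones(dim)                            # seed with uniform weighting
fitness = np.array([ensemble_error(ind) for ind in pop])

for _ in range(generations):
    for i in range(pop_size):
        # Mutation: combine three distinct individuals other than i.
        idx = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
        a, b, c = pop[idx]
        mutant = np.clip(a + F * (b - c), 1e-6, None)
        # Binomial crossover with at least one gene from the mutant.
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True
        trial = np.where(cross, mutant, pop[i])
        # Greedy selection: keep the trial only if it is no worse.
        f_trial = ensemble_error(trial)
        if f_trial <= fitness[i]:
            pop[i], fitness[i] = trial, f_trial

best = pop[fitness.argmin()]
weights = best / best.sum()                      # final ensemble weight vector
```

Because the greedy selection step never accepts a worse individual, the best fitness in the population is monotonically non-increasing, so the optimized ensemble is guaranteed to do at least as well on the objective as the uniform (equal-weight) ensemble it was seeded with.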