
Journal of Shandong University (Engineering Science) ›› 2024, Vol. 54 ›› Issue (4): 1-12. doi: 10.6040/j.issn.1672-3961.0.2023.273

• Machine Learning and Data Mining •    

Explainer for GNN based on evolutionary ensemble learning algorithm

CHANG Xingong, SU Minhui*, ZHOU Zhigang   

  1. School of Information, Shanxi University of Finance and Economics, Taiyuan 030006, Shanxi, China
  • Published: 2024-08-20
  • About the authors: CHANG Xingong (1968— ), male, from Taiyuan, Shanxi, Ph.D., professor and master's supervisor; his main research interests are graph neural networks, data mining, and evolutionary algorithms. E-mail: c_x_g@126.com. *Corresponding author: SU Minhui (1995— ), female, from Jinchang, Gansu, master's student; her main research interests are graph neural networks and evolutionary algorithms. E-mail: suminhui8025@163.com
  • Supported by: the Young Scientists Fund of the National Natural Science Foundation of China (61902226), the Natural Science General Project of the Shanxi Basic Research Program (202203021221218), and the Shanxi Graduate Education Innovation Project (2022Y534)

Abstract: To address the general lack of interpretability in graph neural network models, an explanation method based on evolutionary ensemble learning was proposed to provide higher-quality explanations for model predictions. Two mainstream graph neural network explanation methods, GNNExplainer and PGExplainer, served as base explainers, each producing a preliminary explanation for a model prediction. Genetic operators were then designed around these preliminary explanations, and an improved genetic algorithm was used to ensemble the two preliminary results into a final explanation. Extensive experiments were conducted on four real-world datasets and four synthetic datasets, and the results were evaluated both qualitatively and quantitatively. The results showed that, compared with algorithms of the same kind, the proposed algorithm improved accuracy by 17% and fidelity by 20% on average. Compared with traditional ensemble-learning fusion strategies, the improved genetic algorithm used as the ensembler optimized the explanation methods more markedly, with an overall average improvement of 29% across all metrics. The evolutionary ensemble strategy could therefore significantly improve the performance of graph neural network explanation algorithms.
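The fusion step described in the abstract can be sketched as a toy genetic algorithm. This is a minimal illustration, not the paper's implementation: it assumes each base explainer (e.g. GNNExplainer, PGExplainer) outputs an edge-importance mask, and it uses summed agreement with both masks as a stand-in fitness, whereas the paper's actual fitness would involve prediction fidelity under the trained GNN, which is omitted here. All function and parameter names are hypothetical.

```python
import random

def evolve_ensemble_mask(mask_a, mask_b, k=4, pop_size=30, generations=60, seed=0):
    """Fuse two base explainers' edge-importance masks into one
    explanation that selects k edges, using a simple genetic algorithm.
    An individual is a sorted tuple of k distinct edge indices."""
    rng = random.Random(seed)
    n = len(mask_a)

    def random_individual():
        return tuple(sorted(rng.sample(range(n), k)))

    def fitness(ind):
        # Stand-in fitness: total importance under both base masks.
        return sum(mask_a[i] + mask_b[i] for i in ind)

    def crossover(p1, p2):
        # Child edges are drawn from the union of the parents' edges.
        pool = list(set(p1) | set(p2))
        return tuple(sorted(rng.sample(pool, k)))

    def mutate(ind):
        # Swap one selected edge for a random unselected edge.
        ind = list(ind)
        j = rng.randrange(len(ind))
        ind[j] = rng.choice([e for e in range(n) if e not in ind])
        return tuple(sorted(ind))

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection, elitist
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            child = crossover(p1, p2)
            if rng.random() < 0.2:
                child = mutate(child)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```

Because the elite half of each population survives unchanged, the best explanation found so far is never lost; with a fidelity-based fitness in place of the mask-agreement proxy, the same loop would implement the ensembling step the abstract describes.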

Key words: graph neural network, evolutionary algorithm, ensemble learning, deep learning, machine learning, explainable artificial intelligence

CLC number: 

  • TP391
[1] MA Y, TANG J. Deep learning on graphs[M]. Cambridge, UK: Cambridge University Press, 2021.
[2] PILLAY K, MOODLEY D. Exploring graph neural networks for stock market prediction on the JSE[J]. Communications in Computer and Information Science, 2022, 1551: 95-110.
[3] WU S, SUN F, ZHANG W, et al. Graph neural networks in recommender systems: a survey[J]. ACM Computing Surveys, 2022, 55(5): 1-37.
[4] DOU Y, LIU Z, SUN L. Enhancing graph neural network-based fraud detectors against camouflaged fraudsters[C] //Proceedings of the 29th ACM International Conference on Information & Knowledge Management(CIKM'20). New York, USA: Association for Computing Machinery, 2020: 315-324.
[5] LIU M, GAO H, JI S. Towards deeper graph neural networks[C] //Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York, USA: ACM, 2020: 338-348.
[6] ZHANG M, CUI Z, NEUMANN M, et al. An end-to-end deep learning architecture for graph classification[C] //Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence. New Orleans, USA: AAAI Press, 2018: 4438-4445.
[7] ZHANG M, CHEN Y. Link prediction based on graph neural networks[C] //Proceedings of the 32nd International Conference on Neural Information Processing Systems(NIPS'18). Red Hook, USA: Curran Associates Inc, 2018: 5171-5181.
[8] ZHANG Z, CUI P, ZHU W. Deep learning on graphs: a survey[J]. IEEE Transactions on Knowledge and Data Engineering, 2020, 34(1): 249-270.
[9] WU Z, PAN S, CHEN F, et al. A comprehensive survey on graph neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 32(1): 4-24.
[10] YUAN H, YU H, GUI S, et al. Explainability in graph neural networks: a taxonomic survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(5): 5782-5799.
[11] WU L, CUI P, PEI J, et al. Graph neural networks: foundations, frontiers, and applications[M]. Singapore: Springer, 2022.
[12] HUANG Z, KOSAN M, MEDYA S, et al. Global counterfactual explainer for graph neural networks[C] //Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining(WSDM'23). New York, USA: Association for Computing Machinery, 2023: 141-149.
[13] POPE P E, KOLOURI S, ROSTAMI M, et al. Explainability methods for graph convolutional neural networks[C] //2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition(CVPR). Long Beach, USA: IEEE, 2019: 10764-10773.
[14] ARRIETA A B, DÍAZ-RODRÍGUEZ N, SER J D, et al. Explainable artificial intelligence(XAI): concepts, taxonomies, opportunities and challenges toward responsible AI[J]. Information Fusion, 2020, 58: 82-115.
[15] HUANG Q, YAMADA M, TIAN Y, et al. GraphLIME: local interpretable model explanations for graph neural networks[J]. IEEE Transactions on Knowledge & Data Engineering, 2022, 35(7): 6968-6972.
[16] RIBEIRO M T, SINGH S, GUESTRIN C. "Why should I trust you?" explaining the predictions of any classifier[C] //Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining(KDD'16). New York, USA: Association for Computing Machinery, 2016: 1135-1144.
[17] YING R, BOURGEOIS D, YOU J, et al. GNNExplainer: generating explanations for graph neural networks[C] //Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook, USA: Curran Associates Inc, 2019: 9240-9251.
[18] LUO D, CHENG W, XU D, et al. Parameterized explainer for graph neural network[C] //Proceedings of the 34th International Conference on Neural Information Processing Systems(NIPS'20). Red Hook, USA: Curran Associates Inc, 2020: 19620-19631.
[19] DUVAL A, MALLIAROS F D. GraphSVX: Shapley value explanations for graph neural networks[C] //Machine Learning and Knowledge Discovery in Databases. Cham, Switzerland: Springer, 2021: 302-318.
[20] YUAN H, YU H, WANG J, et al. On explainability of graph neural networks via subgraph explorations[C] // Proceedings of the 38th International Conference on Machine Learning. New York, USA: PMLR, 2021: 12241-12252.
[21] MIENYE I D, SUN Y. A survey of ensemble learning: concepts, algorithms, applications, and prospects[J]. IEEE Access, 2022, 10: 99129-99149.
[22] 周志华. 集成学习:基础与算法[M]. 北京: 电子工业出版社, 2020. ZHOU Zhihua. Ensemble learning: foundations and algorithms[M]. Beijing: Publishing House of Electronics Industry, 2020.
[23] 胡毅, 瞿博阳, 梁静, 等. 进化集成学习算法综述[J].智能科学与技术学报, 2021, 3(1): 18-35. HU Yi, QU Boyang, LIANG Jing, et al. A survey on evolutionary ensemble learning algorithm[J]. Chinese Journal of Intelligent Science and Technology, 2021, 3(1): 18-35.
[24] 姚旭, 王晓丹, 张玉玺, 等. 基于随机子空间和 AdaBoost 的自适应集成方法[J]. 电子学报, 2013, 41(4): 810-814. YAO Xu, WANG Xiaodan, ZHANG Yuxi, et al. A self-adaption ensemble algorithm based on random subspace and AdaBoost[J]. Acta Electronica Sinica, 2013, 41(4): 810-814.
[25] KATOCH S, CHAUHAN S S, KUMAR V. A review on genetic algorithm: past, present, and future[J]. Multimedia Tools and Applications, 2021, 80: 8091-8126.
[26] DHAL K G, RAY S, DAS A, et al. A survey on nature-inspired optimization algorithms and their application in image enhancement domain[J]. Archives of Computational Methods in Engineering, 2019, 26: 1607-1638.
[27] DAI E, WANG S. Towards self-explainable graph neural network[C] //Proceedings of the 30th ACM International Conference on Information & Knowledge Management(CIKM'21). New York, USA: Association for Computing Machinery, 2021: 302-311.
[28] WU Z, RAMSUNDAR B, FEINBERG E N, et al. MoleculeNet: a benchmark for molecular machine learning[J]. Chemical Science, 2018, 9(2): 513-530.