山东大学学报 (工学版) (Journal of Shandong University (Engineering Science)), 2024, Vol. 54, Issue (4): 1-12. doi: 10.6040/j.issn.1672-3961.0.2023.273
• Machine Learning and Data Mining •
常新功,苏敏惠*,周志刚
CHANG Xingong, SU Minhui*, ZHOU Zhigang
Abstract: To address the general lack of interpretability in graph neural network (GNN) models, an explanation method for GNNs based on evolutionary ensembling is proposed to provide higher-quality explanations for model predictions. The mainstream GNN explanation methods GNNExplainer and PGExplainer are used as primary explainers, each producing a primary explanation of the model's prediction; genetic operators are designed around these primary explanation results, and an improved genetic algorithm ensembles the two primary explanations into the final explanation. Extensive experiments are conducted on four real-world datasets and four synthetic datasets, and the results are evaluated both qualitatively and quantitatively. The results show that, compared with algorithms of the same type, the proposed algorithm improves accuracy by 17% and fidelity by 20% on average. Compared with traditional ensemble-learning fusion strategies, using the improved genetic algorithm as the ensembler optimizes the explanation methods more markedly, with an overall average improvement of 29% across all metrics. Adopting the evolutionary ensemble strategy can therefore significantly improve the performance of GNN explanation algorithms.
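As a rough illustration of the ensembling step described in the abstract, the sketch below combines two per-edge importance masks (such as those produced by GNNExplainer and PGExplainer) into a single explanation with a plain genetic algorithm. It is a minimal sketch, not the paper's method: the seeding scheme, tournament selection, uniform crossover, and mutation shown here are generic GA operators rather than the improved operators the paper designs, and `fitness_fn`, the population size, and the rates are hypothetical placeholders for whatever explanation-quality score (for example, fidelity of the prediction on the masked subgraph) and hyperparameters are actually used.

```python
import numpy as np

def evolve_explanation(mask_a, mask_b, fitness_fn, pop_size=50, generations=100,
                       crossover_rate=0.8, mutation_rate=0.05, rng=None):
    """Ensemble two primary edge-importance masks (values in [0, 1]) into one
    final mask by maximizing fitness_fn with a simple genetic algorithm."""
    rng = np.random.default_rng() if rng is None else rng
    n_edges = len(mask_a)

    # Seed the population around the two primary explanations: random convex
    # combinations of the two masks, lightly perturbed with Gaussian noise.
    weights = rng.random((pop_size, 1))
    population = weights * np.asarray(mask_a) + (1 - weights) * np.asarray(mask_b)
    population = np.clip(population + rng.normal(0, 0.02, population.shape), 0, 1)

    for _ in range(generations):
        fitness = np.array([fitness_fn(ind) for ind in population])

        # Tournament selection: keep the fitter of two randomly drawn individuals.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        winners = np.where(fitness[idx[:, 0]] >= fitness[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = population[winners]

        # Uniform crossover between consecutive pairs of parents.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < crossover_rate:
                swap = rng.random(n_edges) < 0.5
                children[i, swap], children[i + 1, swap] = parents[i + 1, swap], parents[i, swap]

        # Mutation: nudge a small fraction of edge scores, then keep them in [0, 1].
        mutate = rng.random(children.shape) < mutation_rate
        children[mutate] = np.clip(children[mutate] + rng.normal(0, 0.1, mutate.sum()), 0, 1)

        # Elitism: carry the best individual of this generation over unchanged.
        children[0] = population[np.argmax(fitness)]
        population = children

    fitness = np.array([fitness_fn(ind) for ind in population])
    return population[np.argmax(fitness)]
```

A typical call would pass a closure such as `lambda mask: fidelity(gnn, graph, mask)` as `fitness_fn` (a hypothetical helper that re-runs the trained GNN on the subgraph induced by thresholding the mask and scores how well the original prediction is preserved); the mask returned with the highest fitness then serves as the final, ensembled explanation.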
References:
[1] MA Y, TANG J. Deep learning on graphs[M]. Cambridge, UK: Cambridge University Press, 2021.
[2] PILLAY K, MOODLEY D. Exploring graph neural networks for stock market prediction on the JSE[J]. Communications in Computer and Information Science, 2022, 1551: 95-110.
[3] WU S, SUN F, ZHANG W, et al. Graph neural networks in recommender systems: a survey[J]. ACM Computing Surveys, 2022, 55(5): 1-37.
[4] DOU Y, LIU Z, SUN L. Enhancing graph neural network-based fraud detectors against camouflaged fraudsters[C]//Proceedings of the 29th ACM International Conference on Information & Knowledge Management (CIKM'20). New York, USA: Association for Computing Machinery, 2020: 315-324.
[5] LIU M, GAO H, JI S. Towards deeper graph neural networks[C]//Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. New York, USA: ACM, 2020: 338-348.
[6] ZHANG M, CUI Z, NEUMANN M, et al. An end-to-end deep learning architecture for graph classification[C]//Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence. New Orleans, USA: AAAI Press, 2018: 4438-4445.
[7] ZHANG M, CHEN Y. Link prediction based on graph neural networks[C]//Proceedings of the 32nd International Conference on Neural Information Processing Systems (NIPS'18). Red Hook, USA: Curran Associates Inc, 2018: 5171-5181.
[8] ZHANG Z, CUI P, ZHU W. Deep learning on graphs: a survey[J]. IEEE Transactions on Knowledge and Data Engineering, 2020, 34(1): 249-270.
[9] WU Z, PAN S, CHEN F, et al. A comprehensive survey on graph neural networks[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020, 32(1): 4-24.
[10] YUAN H, YU H, GUI S, et al. Explainability in graph neural networks: a taxonomic survey[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, 45(5): 5782-5799.
[11] WU L, CUI P, PEI J, et al. Graph neural networks: foundations, frontiers, and applications[M]. Singapore: Springer, 2022.
[12] HUANG Z, KOSAN M, MEDYA S, et al. Global counterfactual explainer for graph neural networks[C]//Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining (WSDM'23). New York, USA: Association for Computing Machinery, 2023: 141-149.
[13] POPE P E, KOLOURI S, ROSTAMI M, et al. Explainability methods for graph convolutional neural networks[C]//2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach, USA: IEEE, 2019: 10764-10773.
[14] ARRIETA A B, DÍAZ-RODRÍGUEZ N, SER J D, et al. Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI[J]. Information Fusion, 2020, 58: 82-115.
[15] HUANG Q, YAMADA M, TIAN Y, et al. GraphLIME: local interpretable model explanations for graph neural networks[J]. IEEE Transactions on Knowledge & Data Engineering, 2022, 35(7): 6968-6972.
[16] RIBEIRO M T, SINGH S, GUESTRIN C. "Why should I trust you?" Explaining the predictions of any classifier[C]//Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'16). New York, USA: Association for Computing Machinery, 2016: 1135-1144.
[17] YING R, BOURGEOIS D, YOU J, et al. GNNExplainer: generating explanations for graph neural networks[C]//Proceedings of the 33rd International Conference on Neural Information Processing Systems. Red Hook, USA: Curran Associates Inc, 2019: 9240-9251.
[18] LUO D, CHENG W, XU D, et al. Parameterized explainer for graph neural network[C]//Proceedings of the 34th International Conference on Neural Information Processing Systems (NIPS'20). Red Hook, USA: Curran Associates Inc, 2020: 19620-19631.
[19] DUVAL A, MALLIAROS F D. GraphSVX: Shapley value explanations for graph neural networks[C]//Machine Learning and Knowledge Discovery in Databases. Cham, Switzerland: Springer, 2021: 302-318.
[20] YUAN H, YU H, WANG J, et al. On explainability of graph neural networks via subgraph explorations[C]//Proceedings of the 38th International Conference on Machine Learning. New York, USA: PMLR, 2021: 12241-12252.
[21] MIENYE I D, SUN Y. A survey of ensemble learning: concepts, algorithms, applications, and prospects[J]. IEEE Access, 2022, 10: 99129-99149.
[22] ZHOU Zhihua. 集成学习:基础与算法 (Ensemble learning: foundations and algorithms)[M]. Beijing: Publishing House of Electronics Industry, 2020.
[23] HU Yi, QU Boyang, LIANG Jing, et al. A survey on evolutionary ensemble learning algorithm[J]. Chinese Journal of Intelligent Science and Technology, 2021, 3(1): 18-35.
[24] YAO Xu, WANG Xiaodan, ZHANG Yuxi, et al. A self-adaption ensemble algorithm based on random subspace and AdaBoost[J]. Acta Electronica Sinica, 2013, 41(4): 810-814.
[25] KATOCH S, CHAUHAN S S, KUMAR V. A review on genetic algorithm: past, present, and future[J]. Multimedia Tools and Applications, 2021, 80: 8091-8126.
[26] DHAL K G, RAY S, DAS A, et al. A survey on nature-inspired optimization algorithms and their application in image enhancement domain[J]. Archives of Computational Methods in Engineering, 2019, 26: 1607-1638.
[27] DAI E, WANG S. Towards self-explainable graph neural network[C]//Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM'21). New York, USA: Association for Computing Machinery, 2021: 302-311.
[28] WU Z, RAMSUNDAR B, FEINBERG E N, et al. MoleculeNet: a benchmark for molecular machine learning[J]. Chemical Science, 2018, 9(2): 513-530.