TY - JOUR
T1 - An explainable intelligence fault diagnosis framework for rotating machinery
AU - Yang, Daoguang
AU - Karimi, Hamid Reza
AU - Gelman, Len
N1 - Funding Information:
This research is supported in part by a scholarship from the China Scholarship Council (CSC) under Grant CSC N201906050158, in part by the Italian Ministry of Education, University and Research through the Project "Department of Excellence LIS4.0 – Lightweight and Smart Structures for Industry 4.0", and in part by the Horizon Marie Skłodowska-Curie Actions program (101073037).
Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2023/7/7
Y1 - 2023/7/7
AB - Convolutional neural networks (CNNs) are considered black boxes due to their strong nonlinear fitting capability. In fault diagnosis for rotating machinery, a standard CNN may make its final decision based on a mixture of significant and insignificant features; a trustworthy intelligent fault diagnosis model with controllable feature learning is therefore required to identify fault types. This paper proposes an explainable intelligence fault diagnosis framework, easily adapted from a standard CNN, that recognizes fault signals using data obtained through the short-time Fourier transform. A post hoc explanation method is used to visualize the features the model learns from a signal. Experimental results show that the proposed framework achieves 100% testing accuracy; the visualizations, together with the Average Drop and Average Increase metrics from a classification activation mappings method, demonstrate the interpretability of the proposed framework.
KW - Classification activation mappings
KW - Convolutional neural networks
KW - Explainable artificial intelligence
KW - Intelligent fault diagnosis
KW - Interpretability
KW - Rotating machinery
UR - http://www.scopus.com/inward/record.url?scp=85158043387&partnerID=8YFLogxK
U2 - 10.1016/j.neucom.2023.126257
DO - 10.1016/j.neucom.2023.126257
M3 - Article
AN - SCOPUS:85158043387
VL - 541
JO - Neurocomputing
JF - Neurocomputing
SN - 0925-2312
M1 - 126257
ER -