An explainable intelligence fault diagnosis framework for rotating machinery

Daoguang Yang, Hamid Reza Karimi, Len Gelman

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)

Abstract

Convolutional neural networks (CNNs) are considered black boxes due to their strong nonlinear fitting capability. In the context of fault diagnosis for rotating machinery, a standard CNN may base its final decision on a mixture of significant and insignificant features; it is therefore necessary to establish a trustworthy intelligence fault diagnosis model with controllable feature learning to identify fault types. In this paper, an explainable intelligence fault diagnosis framework, easily adapted from a standard CNN, is proposed to recognize fault signals using data obtained through the short-time Fourier transform. A post hoc explanation method is used to visualize the features the model learns from a signal. The experimental results show that the proposed explainable intelligence fault diagnosis framework achieves 100% testing accuracy, and its visualizations, together with the Average Drop and Average Increase metrics from a class activation mapping method, demonstrate the interpretability of the proposed framework.
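To make the pipeline described in the abstract concrete, the sketch below shows the three ingredients it names: STFT preprocessing of a vibration signal, a CNN classifier, a gradient-weighted class activation map as the post hoc explanation, and the Average Drop / Average Increase scores computed by comparing class confidence on the full input against the CAM-masked input. This is a minimal illustration, not the authors' code: the network architecture, the 12 kHz sampling rate, the four-class setup, and the use of generic Grad-CAM (rather than the paper's specific CAM variant) are all assumptions made for the example.

```python
# Illustrative sketch only; hyperparameters and architecture are assumed,
# not taken from the paper.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import stft


def signal_to_spectrogram(x, fs=12000, nperseg=256):
    """Turn a 1-D vibration signal into a log-magnitude STFT image."""
    _, _, Zxx = stft(x, fs=fs, nperseg=nperseg)
    return np.log1p(np.abs(Zxx)).astype(np.float32)


class SmallCNN(nn.Module):
    """Toy stand-in for the paper's CNN; four fault classes assumed."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        a = self.features(x)                      # feature maps used by CAM
        z = F.adaptive_avg_pool2d(a, 1).flatten(1)
        return self.head(z), a


def grad_cam(model, x, cls):
    """Generic gradient-weighted class activation map for class `cls`."""
    model.zero_grad()
    logits, acts = model(x)
    acts.retain_grad()
    logits[0, cls].backward()
    w = acts.grad.mean(dim=(2, 3), keepdim=True)  # GAP of gradients
    cam = F.relu((w * acts).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).detach()


def avg_drop_and_increase(model, xs, cams, classes):
    """Average Drop / Average Increase over a set of explained samples:
    confidence on the full input vs. on the CAM-masked input."""
    drops, incs = [], []
    with torch.no_grad():
        for x, cam, c in zip(xs, cams, classes):
            y_full = F.softmax(model(x)[0], dim=1)[0, c]
            y_mask = F.softmax(model(x * cam)[0], dim=1)[0, c]
            drops.append(torch.clamp(y_full - y_mask, min=0) / y_full)
            incs.append(float(y_mask > y_full))
    n = len(xs)
    return 100 * sum(drops).item() / n, 100 * sum(incs) / n


# Hypothetical usage on a random signal:
# spec = signal_to_spectrogram(np.random.randn(4096))
# x = torch.from_numpy(spec)[None, None]
# model = SmallCNN()
# cam = grad_cam(model, x, cls=0)
# drop, inc = avg_drop_and_increase(model, [x], [cam], [0])
```

Under this masked-input protocol, a low Average Drop and a high Average Increase indicate that the highlighted time-frequency regions are the ones actually driving the prediction, which is the sense in which these metrics support the interpretability claim above.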

Original language: English
Article number: 126257
Number of pages: 13
Journal: Neurocomputing
Volume: 541
Early online date: 6 May 2023
Publication status: Published - 7 Jul 2023
