Convolutional neural networks (CNNs) are often considered black boxes because of their strong nonlinear fitting capability. In fault diagnosis for rotating machinery, a standard CNN may base its final decision on a mixture of significant and insignificant features; a trustworthy intelligent fault diagnosis model with controllable feature learning is therefore needed to identify fault types. This paper proposes an explainable intelligent fault diagnosis framework, easily adapted from a standard CNN, that recognizes fault signals from time-frequency representations obtained by the short-time Fourier transform (STFT). A post hoc explanation method is used to visualize the features the model learns from a signal. Experimental results show that the proposed framework achieves 100% testing accuracy, and the visualizations, together with the Average Drop and Average Increase metrics computed from a class activation mapping method, demonstrate its interpretability.
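As a minimal sketch of the STFT preprocessing step, the following converts a one-dimensional vibration signal into a magnitude spectrogram that could serve as the CNN input. The sampling rate, window length, and the synthetic signal are illustrative assumptions, not parameters from the paper:

```python
import numpy as np
from scipy.signal import stft

# Hypothetical vibration signal: 1 s sampled at 12 kHz (values are illustrative)
fs = 12000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 1500 * t) + 0.3 * np.random.randn(fs)

# Short-time Fourier transform -> magnitude spectrogram used as a 2-D CNN input
f, frames, Zxx = stft(signal, fs=fs, nperseg=256, noverlap=128)
spectrogram = np.abs(Zxx)  # shape: (frequency bins, time frames)
```

With `nperseg=256`, the spectrogram has `256 // 2 + 1 = 129` frequency bins; the time axis depends on the signal length and overlap.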
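The Average Drop and Average Increase metrics mentioned above are commonly defined (e.g., in the Grad-CAM++ literature) from the model's confidence on the full input versus the input masked to the CAM-highlighted region. A small sketch under that assumption, with made-up confidence values:

```python
import numpy as np

def average_drop_increase(full_scores, masked_scores):
    """Average Drop / Average Increase, in percent.

    full_scores:   model confidence for the target class on original inputs
    masked_scores: confidence when only the CAM-highlighted region is kept
    """
    full = np.asarray(full_scores, dtype=float)
    masked = np.asarray(masked_scores, dtype=float)
    # Average Drop: mean relative confidence loss, clipped at zero
    avg_drop = np.mean(np.maximum(0.0, full - masked) / full) * 100.0
    # Average Increase: fraction of samples whose confidence rose
    avg_increase = np.mean(masked > full) * 100.0
    return avg_drop, avg_increase

# Toy confidences for three samples (illustrative only)
drop, inc = average_drop_increase([0.9, 0.8, 0.6], [0.45, 0.9, 0.6])
```

A lower Average Drop and a higher Average Increase indicate that the regions highlighted by the CAM method genuinely drive the classification.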