Which Explanation Should be Selected: A Method Agnostic Model Class Reliance Explanation for Model and Explanation Multiplicity

Research output: Contribution to journal › Article › peer-review

Abstract

Feature importance techniques offer valuable insights into machine learning (ML) models by quantitatively assessing each variable's contribution to the model's predictions. This quantification differs across explanation methods and across multiple, almost equally accurate models (Rashomon models), creating explanation and model multiplicity. To address this, a novel framework called method agnostic model class reliance range (MAMCR) is proposed for identifying a unified explanation across methods for multiple models. This consensus explanation provides each feature's importance range for a class of models. Using state-of-the-art feature importance methods, experiments are conducted on popular machine learning datasets with an ε-threshold of 0.1. The dataset-specific Rashomon sets of 200 models, built around the prediction accuracy of the reference models (m), produce encouraging results in obtaining a consensus model reliance explanation that is consistent across multiple methods. The experimental results also examine whether the models' prediction accuracy level affects the estimated importance ranges of features. Moreover, ordering features as suggested by MAMCR consistently yields better model performance than the state-of-the-art methods on all experimented datasets.
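The idea described in the abstract can be sketched in code. The following is an illustrative sketch, not the authors' actual MAMCR algorithm: it filters a model pool to an ε-Rashomon set (models within ε of the best accuracy), computes a model reliance score per feature for each (model, method) pair, and reports the per-feature min/max as the importance range. The function names (`permutation_importance`, `mamcr_range`) and the use of permutation-based reliance as the example explanation method are assumptions for illustration.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, rng=None):
    """One example explanation method: model reliance of each feature,
    measured as the drop in the metric when that feature is permuted."""
    rng = np.random.default_rng(rng)
    base = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-target link
            drops.append(base - metric(y, model(Xp)))
        importances[j] = np.mean(drops)
    return importances

def mamcr_range(models, methods, X, y, metric, eps=0.1):
    """Illustrative consensus range (assumed interface, not the paper's code):
    keep models within eps of the best score (the Rashomon set), then take the
    per-feature min/max importance across all (model, method) pairs."""
    scores = np.array([metric(y, m(X)) for m in models])
    rashomon = [m for m, s in zip(models, scores) if s >= scores.max() - eps]
    all_imps = np.array([meth(m, X, y, metric)
                         for m in rashomon for meth in methods])
    return all_imps.min(axis=0), all_imps.max(axis=0)
```

A toy usage: with two near-equally accurate classifiers that both rely mainly on feature 0, the returned range for feature 0 should sit clearly above the range for a noise feature.

```python
rng = np.random.default_rng(42)
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)                      # feature 0 carries the signal
acc = lambda yt, yp: np.mean(yt == yp)
m1 = lambda X: (X[:, 0] > 0).astype(int)           # two Rashomon-style models
m2 = lambda X: (0.9 * X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)
meth = lambda m, X, y, metric: permutation_importance(m, X, y, metric, rng=0)
lo, hi = mamcr_range([m1, m2], [meth], X, y, acc, eps=0.1)
```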

Original language: English
Article number: 503
Number of pages: 20
Journal: SN Computer Science
Volume: 5
Issue number: 5
Early online date: 27 Apr 2024
DOIs
Publication status: E-pub ahead of print - 27 Apr 2024
