Argument Schemes and a Dialogue System for Explainable Planning

Quratul-ain Mahesar, Simon Parsons

Research output: Contribution to journal › Article › peer-review


Artificial Intelligence (AI) is increasingly deployed in practical applications. However, a major concern is whether AI systems will be trusted by humans. To establish trust in AI systems, users need to understand the reasoning behind their solutions; therefore, systems should be able to explain and justify their output. Explainable AI Planning is a field concerned with explaining the outputs, i.e., the solution plans produced by AI planning systems, to a user. The main goal of a plan explanation is to help humans understand the reasoning behind the plans produced by the planners. In this article, we propose an argument scheme-based approach to providing explanations in the domain of AI planning. We present novel argument schemes to create arguments that explain a plan and its key elements, together with a set of critical questions that allow interaction between the arguments and enable the user to obtain further information regarding the key elements of the plan. Furthermore, we present a novel dialogue system that uses the argument schemes and critical questions to provide interactive dialectical explanations.

Original language: English
Article number: 89
Number of pages: 25
Journal: ACM Transactions on Intelligent Systems and Technology
Issue number: 5
Early online date: 30 Sep 2023
Publication status: Published - 1 Oct 2023