Description

Artificial Intelligence solutions are now being applied across areas as disparate as healthcare and transportation, thanks to high-profile successes such as AlphaGo, Waymo and virtual assistants. However, adoption remains relatively low in some areas, including supply chain applications. While in the past this could mostly be attributed to technology immaturity, the increased robustness of AI solutions means that other factors now contribute. One such factor is arguably a lack of trust and transparency, because some AI models are opaque, or "black box". There are two main ways to address this: enhancing the interpretability of machine learning models, or building hybrid solutions that include inherently explainable components such as multi-criteria decision analysis and knowledge-based AI using ontologies and knowledge graphs. This talk focuses on research along these lines at the University of Huddersfield, including work on interpretable prediction of supply chain risk, hybrid solutions to supplier selection, and agent-based modelling of complex adaptive systems such as supply chains.
6 May 2021
University of Cambridge, United Kingdom
Documents & Links
A review of explainable artificial intelligence in supply chain management using neurosymbolic approaches (peer-reviewed review article)