Performance Robustness of AI Planners in the 2014 International Planning Competition

Andrea Bocchese, Chris Fawcett, Mauro Vallati, Alfonso E. Gerevini, Holger H. Hoos

Research output: Contribution to journal › Article › peer-review



Solver competitions have been used in many areas of AI to assess the current state of the art and guide future research and development. AI planning is no exception, and the International Planning Competition (IPC) has been frequently run for nearly two decades. Due to the organisational and computational burden involved in running these competitions, solvers are generally compared using a single homogeneous hardware and software environment for all competitors. To what extent does the specific choice of hardware and software environment have an effect on solver performance, and is that effect distributed equally across the competing solvers?
In this work, we use the competing planners and benchmark instance sets from the 2014 IPC to investigate these two questions. We recreate the 2014 IPC Optimal and Agile tracks on two distinct hardware environments and eight distinct software environments. We show that solver performance varies significantly based on the hardware and software environment, and that this variation is not equal for all planners. Furthermore, the observed variation is sufficient to change the competition rankings, including the top-ranked planners for some tracks.
Original language: English
Pages (from-to): 445-463
Number of pages: 19
Journal: AI Communications
Issue number: 6
Early online date: 15 Oct 2018
Publication status: Published - 21 Dec 2018

