Performance Robustness of AI Planners in the 2014 International Planning Competition

Andrea Bocchese, Chris Fawcett, Mauro Vallati, Alfonso E. Gerevini, Holger H. Hoos

Research output: Contribution to journal › Article

Abstract

Solver competitions have been used in many areas of AI to assess the current state of the art and guide future research and development. AI planning is no exception, and the International Planning Competition (IPC) has been run frequently for nearly two decades. Due to the organisational and computational burden involved in running these competitions, solvers are generally compared using a single homogeneous hardware and software environment for all competitors. To what extent does the specific choice of hardware and software environment have an effect on solver performance, and is that effect distributed equally across the competing solvers?
In this work, we use the competing planners and benchmark instance sets from the 2014 IPC to investigate these two questions. We recreate the 2014 IPC Optimal and Agile tracks on two distinct hardware environments and eight distinct software environments. We show that solver performance varies significantly based on the hardware and software environment, and that this variation is not equal for all planners. Furthermore, the observed variation is sufficient to change the competition rankings, including the top-ranked planners for some tracks.
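
The abstract's central claim, that environment-induced runtime variation can flip competition rankings, can be illustrated with a small sketch. The example below is not from the paper: the planner names, runtimes, and instances are hypothetical, and the per-instance score 1 / (1 + log10(T / T*)) is the time-based formula commonly associated with the IPC Agile track, where T is a planner's runtime and T* is the fastest runtime any planner achieved on that instance (unsolved instances score 0).

import math

def agile_scores(results):
    """Compute IPC-Agile-style scores from per-instance runtimes.

    results maps each planner name to a list of runtimes in seconds,
    with None marking an unsolved instance. On each instance, a planner
    that solved it earns 1 / (1 + log10(T / T*)), where T* is the fastest
    time any planner achieved on that instance.
    """
    n_instances = len(next(iter(results.values())))
    scores = {planner: 0.0 for planner in results}
    for i in range(n_instances):
        solved = {p: times[i] for p, times in results.items() if times[i] is not None}
        if not solved:
            continue  # nobody solved this instance; no points awarded
        best = min(solved.values())
        for planner, t in solved.items():
            scores[planner] += 1.0 / (1.0 + math.log10(t / best))
    return scores

# Hypothetical runtimes (seconds) for two planners on three instances,
# "measured" in two different hardware/software environments; None = unsolved.
env_a = {"planner_x": [1.0, 10.0, None], "planner_y": [2.0, 5.0, 40.0]}
env_b = {"planner_x": [1.0, 8.0, 250.0], "planner_y": [4.0, 20.0, None]}

for label, results in [("environment A", env_a), ("environment B", env_b)]:
    scores = agile_scores(results)
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(label, {p: round(s, 2) for p, s in scores.items()}, "->", ranking)

Running the sketch gives planner_y the higher total in environment A and planner_x the higher total in environment B: modest runtime shifts between the two environments are enough to swap the two planners' ranks, mirroring on a toy scale the ranking changes reported in the paper.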
Language: English
Pages: 445-463
Number of pages: 19
Journal: AI Communications
Volume: 31
Issue number: 6
Early online date: 15 Oct 2018
DOI: 10.3233/AIC-170537
Publication status: Published - 21 Dec 2018

Cite this

Bocchese, Andrea; Fawcett, Chris; Vallati, Mauro; Gerevini, Alfonso E.; Hoos, Holger H. Performance Robustness of AI Planners in the 2014 International Planning Competition. In: AI Communications. 2018; Vol. 31, No. 6, pp. 445-463.
@article{8c453546d8084cc9b880600ecf3a1824,
title = "Performance Robustness of AI Planners in the 2014 International Planning Competition",
abstract = "Solver competitions have been used in many areas of AI to assess the current state of the art and guide future research and development. AI planning is no exception, and the International Planning Competition (IPC) has been run frequently for nearly two decades. Due to the organisational and computational burden involved in running these competitions, solvers are generally compared using a single homogeneous hardware and software environment for all competitors. To what extent does the specific choice of hardware and software environment have an effect on solver performance, and is that effect distributed equally across the competing solvers? In this work, we use the competing planners and benchmark instance sets from the 2014 IPC to investigate these two questions. We recreate the 2014 IPC Optimal and Agile tracks on two distinct hardware environments and eight distinct software environments. We show that solver performance varies significantly based on the hardware and software environment, and that this variation is not equal for all planners. Furthermore, the observed variation is sufficient to change the competition rankings, including the top-ranked planners for some tracks.",
keywords = "Automated Planning, Domain-Independent Planners, International Planning Competition, Algorithm Performance Robustness",
author = "Andrea Bocchese and Chris Fawcett and Mauro Vallati and Gerevini, {Alfonso E.} and Hoos, {Holger H.}",
year = "2018",
month = "12",
day = "21",
doi = "10.3233/AIC-170537",
language = "English",
volume = "31",
pages = "445--463",
journal = "AI Communications",
issn = "0921-7126",
publisher = "IOS Press",
number = "6",
}
