Simulating low-level cognitive behaviour, such as reaction to stimuli, has long been a major focus of research and development in the autonomous systems (AS) community. Automated assessment of sensor data, and the reactive selection of actions in the form of condition-action pairs, are well developed in robotics and control applications. In contrast, a characteristic of more intelligent behaviour is the ability to reason with self-knowledge: an AS knows about the actions it can perform, the resources it has, the goals it must achieve, and the current state and environment it finds itself in; and it can reason with all this knowledge in order to synthesise, and carry out, plans that achieve its goals. For example, an unmanned vehicle on the surface of Mars might be requested to collect a rock sample at a given position, or a spacecraft might be required to take a photograph of a star constellation. Such tasks require an AS to generate, or be given, detailed plans. Giving AS applications the general ability to generate reliable plans in this manner remains a great challenge, because of the difficulty of creating plans quickly enough in real-time situations, and the problems of representing the AS's domain knowledge and keeping it up to date.
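To make the contrast concrete, the reactive, condition-action style of control mentioned above can be sketched as follows; this is a purely hypothetical illustration, and the sensor readings and actions are invented for the sketch.

```python
# Hypothetical illustration of condition-action control: each rule pairs a
# condition on sensor data with an action; the first matching rule fires.
REACTIVE_RULES = [
    (lambda s: s["obstacle_distance"] < 0.5, "stop"),
    (lambda s: s["battery_level"] < 0.1, "return_to_base"),
    (lambda s: True, "continue"),  # default action when nothing else matches
]

def react(sensors):
    """Select the action of the first rule whose condition holds."""
    for condition, action in REACTIVE_RULES:
        if condition(sensors):
            return action

react({"obstacle_distance": 0.3, "battery_level": 0.8})  # "stop"
```

Such rules react to the immediate situation only; they embody no knowledge of goals or of the cumulative effects of actions, which is precisely what plan synthesis requires.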
Researchers in automated planning (the automatic creation of plans) have recently made many breakthroughs, and such planners are now capable of reasoning efficiently and accurately with detailed representations of knowledge. As a result, automated planning software is used in a wide range of applications, including fire fighting, elevator control, emergency landing, aircraft repair scheduling, workflow generation, narrative generation, and battery load balancing. In space applications, scientists at NASA have been developing systems based on this technology for the control of autonomous vehicles, and have deployed systems that plan activities for spacecraft, schedule observations for the Hubble Space Telescope, and control underwater vehicles.
While the development of automated planning has been encouraging, a major problem remains in all these applications, limiting their adaptability and making them difficult to maintain and validate: much of the AS's high-level self-knowledge (knowledge of its actions, resources, goals, objects, states and environment) must be programmed or encoded into the system before operation; this encoded knowledge is often called a `domain model'. Experience has shown that this encoding demands a great deal of expert time and effort. It also means that whenever the AS's capabilities change, for example if the preconditions or effects of an action change, the new knowledge describing this must be re-entered into the system by human experts.
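As an illustration of what such a domain model contains, one action might be encoded in the classical STRIPS style as a set of preconditions plus add and delete effects. The sketch below is hypothetical (the facts and action names are invented) and is not drawn from any specific planner.

```python
# Hypothetical STRIPS-style entry in a domain model. States and effects are
# modelled as sets of ground facts, written here as plain strings.

def applicable(state, preconditions):
    """An action is applicable when all of its preconditions hold."""
    return preconditions <= state

def apply_effects(state, add_effects, delete_effects):
    """Executing the action removes its delete effects, then adds its add effects."""
    return (state - delete_effects) | add_effects

# Domain-model entry for a rover's (invented) 'pick_up_rock' action.
preconditions = {"at(rover, site1)", "free(gripper)"}
add_effects = {"holding(rock1)"}
delete_effects = {"free(gripper)"}

state = {"at(rover, site1)", "free(gripper)"}
if applicable(state, preconditions):
    state = apply_effects(state, add_effects, delete_effects)
# state is now {"at(rover, site1)", "holding(rock1)"}
```

It is exactly this kind of structure, multiplied over all of an AS's actions, that currently has to be hand-encoded and hand-maintained by experts.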
This research project seeks to lead the way to overcoming this challenge by enabling an AS to learn and adapt its own domain model. It aims to discover methods by which an AS can acquire knowledge initially, and then maintain and evolve that knowledge via sensed feedback after executing actions. While methods for empowering AS to learn basic reactions, or to learn how to classify data, are well established, methods for getting an AS to learn and adapt knowledge of structures such as actions are much less developed. The proposers will build on their recent research results in this area to research and develop prototype AS that can learn and adapt their domain models. They will demonstrate and evaluate their research using virtual worlds that model real applications.
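The kind of feedback loop envisaged, revising an action's learned effects when the state observed after execution differs from the model's prediction, can be sketched as follows. This is a deliberately simplistic, hypothetical update rule intended only to illustrate the idea; it is not the proposers' actual method, and all names are invented.

```python
# Minimal sketch of adapting a learned action model from execution feedback.
# States are sets of facts; the model records the add and delete effects
# currently believed for one action.

def predict(model, state):
    """The state the model expects after executing the action."""
    return (state - model["del"]) | model["add"]

def revise(model, state_before, state_after):
    """Revise the effects so the model explains an observed transition."""
    # Add newly observed effects; drop predicted effects that did not occur.
    model["add"] = (model["add"] | (state_after - state_before)) & state_after
    model["del"] = (model["del"] | (state_before - state_after)) - state_after
    return model

# The model wrongly believes picking up a rock also opens a door.
model = {"add": {"holding(rock1)", "door_open"}, "del": {"free(gripper)"}}
before = {"at(site1)", "free(gripper)"}
after = {"at(site1)", "holding(rock1)"}  # state sensed after execution
revise(model, before, after)
# The spurious "door_open" effect is dropped, and the revised model's
# prediction from `before` now matches the observed state.
```

A realistic learner must of course also handle noisy sensing, conditional effects and revision of preconditions, which is where the research challenge lies.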