We work at the intersection of machine learning and task planning, with the goal of developing long-lived autonomous systems that are robust and trusted. As these systems become more complex, they become more opaque to non-expert users. As a result, current applications of autonomous systems in challenging environments, or in interaction with untrained, non-expert users, are limited: the systems operate within environments that have been adapted to support them, within very small time windows, or with tightly confined behaviour. Our vision is to develop novel approaches to intelligent control that can react robustly and safely in dynamic and challenging environments, explain their behaviour, and work within mixed teams of humans and machines.
Our research students are part of the University of Strathclyde's Centre for Doctoral Training on Explainable AI for Decision Support. The centre focuses on the role of human expertise in Explainable AI decision-support systems for industrial applications: specifically, how human expertise can be captured, represented, retained, and utilised to provide qualified, robust, and evidence-based decisions in such areas as planning, scheduling, optimisation, and through-life asset management.
Our research projects focus on the deliberation that takes place in autonomous systems designed to act in the real world, in challenging environments, and in collaboration with humans. This involves making decisions in dialogue with human supervisors, and acting autonomously to carry out the agreed behaviours alongside human colleagues in dynamic and uncertain environments.
Current research students are investigating the links between robust optimisation and flexible plan execution, leveraging historical data to determine risk-bounded behaviours; developing new approaches to Explainable AI for satellite mission planning and scheduling; and designing new architectures for goal reasoning on board persistently autonomous robot teams.
We are working with a number of project partners, including ESA/ESOC; AGS Airports; ANRA technologies; and REDOne technologies.
- Robustness Envelopes for Temporal Plans. Cashmore, M.; Cimatti, A.; Magazzeni, D.; Micheli, A.; and Zehtabi, P. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7538–7545, 2019.
- Temporal Planning While the Clock Ticks. Cashmore, M.; Coles, A.; Cserna, B.; Karpas, E.; Magazzeni, D.; and Ruml, W. 2018.
- Verbalization of Plan Execution for Human-Robot Teaming. Moon, J.; Magazzeni, D.; Cashmore, M.; Lee, B.; Moon, Y.; and Roh, S. In IROS Workshop on Semantic Descriptor, Semantic Modeling and Mapping for Humanlike Perception and Navigation of Mobile Robots toward Large Scale Long-Term Autonomy, 2019.
- A New Approach to Plan-Space Explanation: Analyzing Plan-Property Dependencies in Oversubscription Planning. Eifler, R.; Cashmore, M.; Hoffmann, J.; Magazzeni, D.; and Steinmetz, M. In Proceedings of the AAAI Conference on Artificial Intelligence, pages 9818–9826, 2020.
- Intent-Driven Strategic Tactical Planning for Autonomous Site Inspection Using Cooperative Drones. Buksz, D.; Mujumdar, A.; Orlic, M.; Mohalik, S.; Daoutis, M.; Badrinath, R.; Magazzeni, D.; Cashmore, M.; and Feljan, A. V. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020.
- Using Machine Learning for Decreasing State Uncertainty in Planning. Krivic, S.; Cashmore, M.; Magazzeni, D.; Szedmak, S.; and Piater, J. Journal of Artificial Intelligence Research, 69: 765–806, 2020.