Current PhD projects
- Learning Causality from data with generative models
- Compositional generative models for problem solving
- Human factors in AI for healthcare
- Using data analytics to optimise perioperative medicine patient care pathways
- Social transformation for wellbeing: Using big data to understand social patterns of health outcomes
- Incorporating expert judgment into machine learning models
- The development of novel outcomes from the objective measurement of free-living physical behaviour and their relationship to determinants of health
- Robust and Explainable Mission Planning and Scheduling
- Data-Driven Automated Scheduling under Correlated Uncertainty
- Robust Plan Execution for Autonomous Robots
Strathclyde Doctoral Training Centre
Human Centric AI in Healthcare (HAIH)
Health data are growing rapidly. The rising influence of AI has opened up great prospects, but it has also raised important questions about the future role of AI in relation to humans. These questions are critical in healthcare, where humans are expected to make the clinical decisions. Can we trust AI decisions? What is the best way to assess AI outcomes? These key questions remain open, and the shortage of research in this area has already led to problems. There is therefore a significant demand for training researchers to carry out research in this new dimension.
We envision a new research direction in AI for health that fosters a transformative, human-centric paradigm, in which a collaborative relationship between computers and humans is built on the ever-increasing power of AI. Computers will no longer be used merely as tools; instead, they will act as co-workers, offering sound, clear, and evidence-based solutions to support clinicians and patients in clinical decision making and enhance clinical outcomes.
HAIH is a Strathclyde Centre for Doctoral Training (SCDT) established in October 2020. Its research aims to help clinicians and patients understand and trust AI by making AI solutions explainable and transparent. We train doctoral researchers in human-centric AI for healthcare through multidisciplinary research addressing key challenges in trusted data, trusted AI, and trust in human factors.
We are recruiting new PhD students. If you are interested in studying with us, please contact Prof Feng Dong (firstname.lastname@example.org).
Explainable AI & Industrial Decision Support (EXPLAIN)
Artificial Intelligence is a crucial element of future economic success. However, it is difficult to trust AI-controlled systems that cannot explain why they are doing what they are doing. In most scenarios, these systems must operate at the boundaries of their competence and require human collaboration or supervision.
The Strathclyde Centre for Doctoral Training in Explainable AI and Industrial Decision Support (EXPLAIN) was established in October 2020. The centre is a multidisciplinary group for doctoral training in AI decision-support systems. We focus on how human expertise can be captured, represented, retained, and utilised to provide qualified, robust, and evidence-based decisions in areas such as planning, scheduling, optimisation, and through-life asset management. Advances in analytics, machine learning, and automated systems will allow human intelligence to solve ever more complex problems in these areas.
If you are interested in studying for a PhD, please contact Dr Michael Cashmore (email@example.com).
Find out more about postgraduate study here at Strathclyde.