- Opens: Wednesday 23 March 2022
- Number of places: 1
- Duration: 3.5 years
Overview

The research will develop methods for automatic uncertainty characterization and propagation based on probabilistic computing and virtual experts. Probabilistic computing is a promising approach from the AI community for addressing the uncertainties inherent in so-called natural data (or uncertain numbers).
- Masters level degree (MEng, MPhys, MSc) or equivalent (minimum 2:1)
- Knowledge of coding in any language
- Desire to work collegiately and be involved in outreach
Digital twins are virtual replicas of components and systems, and they are becoming increasingly popular tools to predict, monitor and control the behaviour of complex systems. Digital twins and machine learning are often used to reduce the experimental testing required during the design phase, to probe the behaviour of a system under critical conditions, and to create scenarios that are difficult or impractical to recreate in practice. A realistic digital twin consists of dynamic models that are constantly updated with different streams of data and information and are able to predict (simulate) the performance of the system with the required level of confidence.
A digital twin needs data to be constructed, but data are often unreliable: they can be imprecise, incomplete, truncated, missing, censored or corrupted, to mention just a few problems. It is therefore necessary to complement the digital model with physics-based rules and to account explicitly for the uncertainty. Models are only an approximation of reality, and their accuracy needs to be estimated and taken into account. Propagating uncertainty through models is challenging for a non-expert in stochastic analysis and probabilistic models, and handling large amounts of data is cumbersome, slow and expensive.
Uncertainty analysis is too important to be left to inexperienced analysts: our calculation tools must perform it automatically. Uncertainty analysis can also be used to give the benefit of the doubt in uncertain cases where safety or fairness is a salient issue. Probabilistic methods are among the most powerful methods available in computational science, but they are expensive and require expertise to apply correctly. Just as automatic differentiation has enabled modern machine learning, automatic uncertainty propagation would enable cheap and speedy probabilistic calculations.
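The analogy with automatic differentiation can be made concrete: just as a dual number carries a derivative through every operation, a value can carry its uncertainty. The sketch below is a minimal, hypothetical illustration (the class and variable names are not from any project software) of first-order Gaussian error propagation via operator overloading, assuming independent operands:

```python
# Minimal sketch of "automatic uncertainty" by analogy with automatic
# differentiation: each value carries a standard deviation through every
# operation. First-order (delta-method) propagation, independence assumed;
# all names here are illustrative, not the project's actual API.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Gaussian:
    mean: float
    std: float

    def __add__(self, other):
        # For independent Gaussians, variances add.
        return Gaussian(self.mean + other.mean,
                        math.hypot(self.std, other.std))

    def __mul__(self, other):
        # First-order propagation (assumes nonzero means):
        # relative variances add.
        mean = self.mean * other.mean
        std = abs(mean) * math.hypot(self.std / self.mean,
                                     other.std / other.mean)
        return Gaussian(mean, std)

a = Gaussian(10.0, 1.0)
b = Gaussian(5.0, 0.5)
c = a + b   # mean 15.0, std sqrt(1.0**2 + 0.5**2)
print(c)
```

The engineer writes ordinary arithmetic; the uncertainty bookkeeping happens behind the scenes, which is exactly the kind of automation the project aims to generalise.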
Aim & objectives
The main aim of the proposed research is to develop robust probabilistic computing tools to support the development of sustainable infrastructure and systems.
The research will develop methods for automatic uncertainty characterization and propagation based on probabilistic computing and virtual experts. Probabilistic computing is a promising approach from the AI community for addressing the uncertainties inherent in so-called natural data (or uncertain numbers). The most challenging part is the arithmetic of uncertain numbers, since the operations themselves introduce dependence: after several operations, variables are correlated even if they began as independent (every time a binary operation occurs, the computer needs to know how the operand on the left of the operator is correlated with the one on the right). Solving this problem requires variable dependencies to be calculated, stored and tracked. It is not yet known how to achieve this exactly in mathematical terms, but algorithmic strategies are currently being developed that yield bounds with certain numerical guarantees.
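The dependence problem described above can be seen with the simplest kind of uncertain number, an interval. The sketch below is illustrative only (the `Interval` class is hypothetical, not the project's software): naive interval arithmetic assumes every binary operation has independent operands, so a repeated variable inflates the bounds.

```python
# Minimal sketch of the dependence problem in uncertain-number arithmetic:
# naive interval operations treat the two operands of every binary
# operation as independent, so repeated variables give overly wide bounds.
# The class name is hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Worst-case bounds, valid only if the operands are independent.
        return Interval(self.lo - other.hi, self.hi - other.lo)

x = Interval(1.0, 3.0)

# Naive evaluation of x - x: both operands are the same variable, hence
# perfectly dependent, but the arithmetic does not know that.
naive = x - x
print(naive)   # Interval(lo=-2.0, hi=2.0), not the exact answer [0, 0]
```

Recovering the exact `[0, 0]` requires exactly what the text describes: recording that both operands refer to the same variable, i.e. calculating, storing and tracking dependencies through the computation.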
Probabilistic programming and automatic uncertainty are high-level software tools that allow developers to define models and "solve" them automatically in a probabilistic sense. Rather than adjusting decimal numbers, as is done in (deep) neural networks, probabilistic computing modifies the code in its simulation to try to match its predictions with those of a human. The models are therefore more transparent, opening a dialogue between human and machine and allowing developers to quickly update the code with new rules of thumb.
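In the probabilistic-programming style, the model is ordinary code over random inputs, and running it yields a distribution rather than a single number. The following is a minimal Monte Carlo sketch of that idea; the physical model and all parameter values are invented for illustration and are not from the project:

```python
# Minimal Monte Carlo sketch of a probabilistic program: the model is
# plain code over uncertain inputs, and its output is a distribution.
# The toy physics rule and parameter values are illustrative assumptions.
import random
import statistics

def beam_deflection(load, stiffness):
    # Toy physics-based rule; a real model would come from the digital twin.
    return load / stiffness

def run_model(n_samples=10_000, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        load = rng.gauss(100.0, 10.0)        # uncertain load
        stiffness = rng.uniform(45.0, 55.0)  # imprecise stiffness
        samples.append(beam_deflection(load, stiffness))
    return samples

samples = run_model()
mean = statistics.fmean(samples)
lo, hi = min(samples), max(samples)
print(f"predicted deflection: mean {mean:.2f}, range [{lo:.2f}, {hi:.2f}]")
```

Because the model is readable code rather than a trained weight matrix, a developer can inspect it, add a new rule of thumb (for example, a bound on the admissible stiffness) and immediately rerun the analysis.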
The main ambition of this proposal is computing with uncertain numbers. From an engineering perspective, this is a game-changing tool: it allows an engineer to easily add assumptions or physics-based rules to a model and then test them against the data. In turn, it also provides a diagnostic tool based on violations of the predefined rules. The project also supports the cost-effective design and management of infrastructure, based on condition-monitoring data, operational experience and expert opinion integrated into an automated and verified computational decision tool.
The developed approach will be tested on manufacturing processes provided by the Digital Factory of the National Manufacturing Institute Scotland (NMIS) to reduce the degree (and expense) of post-processing inspection for validating component properties.
The supervisory team comprises members of the Centre for Intelligent Infrastructure, the Department of Civil and Environmental Engineering, the Department of Mechanical and Aerospace Engineering and the Digital Factory of NMIS, together with international collaborators, providing the necessary expertise for ground-breaking research to address emerging societal needs and challenges.
The student will join the multidisciplinary and international research group of Prof Patelli and the research centre of the Digital Factory of the National Manufacturing Institute Scotland. The student will receive training in technical skills as well as in transferable skills such as communication, presentation, coding and programming, and ethics in AI technologies. The student will also participate in hackathon events and in the organisation of conferences, international workshops and training activities, and will be supported in the development and implementation of novel computational technologies by the members and collaborators of the Cossan software.
Involvement in the EPSRC-funded research, together with secondments and visits to the international partners, will also provide a very valuable student experience.