How to apply
For more information on studying with this Department and to complete an application to study go to our Research Opportunities page.
This project combines mathematical modelling and long-term ecological data to close a crucial gap in our understanding of how the impacts of climate change at the base of marine food chains (plankton) translate into impacts on wild salmon and other fish, seabirds, and mammals at the top of those food chains.
Wild Atlantic salmon have played a culturally iconic and economically important role in Scotland and beyond for thousands of years. They are also ecological integrators, moving between freshwater and wide-ranging ocean habitats during their life cycles. In recent decades many salmon populations around the Atlantic Ocean have declined dramatically, in large part because of reduced survival during their time in the ocean. Like many ocean fish, wild salmon depend on very long food chains. Even a young salmon in its first summer at sea eats mainly smaller “forage fish” (small silvery fish like herring, sandeel, capelin, and so on), which prey upon millimetre-scale crustacean zooplankton, which in turn graze on single-celled phytoplankton. Climate-change impacts on salmon can arise from ecological effects at any point along this chain. However, almost all large-scale modelling efforts linking salmon to climate skip over the salmon’s actual prey (forage fish) and usually even the prey of their prey (zooplankton). In the North Atlantic, observations using the Continuous Plankton Recorder (CPR) provide a partial solution at the level of zooplankton: Tyldesley et al. (2024) developed a “Zooplankton Energy in the Diet of Forage Fish” index (ZE) which partly explains marine-survival patterns in salmon populations around the UK and Ireland—but the forage fish that link the zooplankton to the salmon are still missing from this picture. That is the gap which this studentship will address.
The student will build upon recent mathematical models by Olin et al. (submitted) and Ljungstrom et al. (2020) to create a model of “forage fish growth potential” (FFGP) — a systematic, generalisable (“trait-based”) method for estimating how a forage fish with a given body size and behaviour pattern would respond to a particular assemblage of zooplankton. The existing ZE index simply adds up calories; the FFGP index is intended to capture further considerations like “one large tasty crustacean may be worth much more than its weight in small ones, particularly in coastal waters where visibility is poor”. The student will use the new model to map FFGP by decade across the North Atlantic (1960s–present), and ground-truth it using regional forage-fish time series provided by our international project partners.
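The distinction between a ZE-style calorie sum and a size-aware FFGP-style score can be illustrated with a toy calculation. Everything below is invented for illustration only: the energies, abundances, and the simple visibility-dependent detectability rule are placeholder assumptions, not the project's actual model.

```python
# Toy comparison of a plain energy sum (in the spirit of the ZE index) with a
# size- and visibility-weighted score (in the spirit of FFGP). All numbers and
# the detectability rule are invented placeholders, not the project's model.

def energy_index(assemblage):
    """Total energy on offer (arbitrary units): abundance times energy content."""
    return sum(t["energy"] * t["abundance"] for t in assemblage)

def growth_potential(assemblage, visibility=1.0):
    """Hypothetical score: larger prey are easier for a visual predator to
    detect, more so in clear water (the detectability form is an assumption)."""
    score = 0.0
    for t in assemblage:
        detectability = min(1.0, visibility * t["length_mm"] / 5.0)
        score += t["energy"] * t["abundance"] * detectability
    return score

assemblage = [
    {"name": "Calanus finmarchicus", "length_mm": 3.0, "energy": 2.0, "abundance": 10.0},
    {"name": "small copepods", "length_mm": 1.0, "energy": 0.5, "abundance": 60.0},
]
```

The plain sum is blind to prey size and water clarity, while the weighted score departs from it once those traits matter; that is the kind of consideration a trait-based FFGP index is intended to capture.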
The final step will be to re-evaluate the relationship between declining marine survival in wild salmon and shifting migration patterns. More UK salmon have been spending a second year at sea in recent years, ranging as far as West Greenland; is this because they are “pushed” by poor feeding conditions in their home waters, or because they are “pulled” by relatively better feeding opportunities across the Atlantic? The question is fundamental to how our team brings ocean conditions into the Missing Salmon Alliance’s Atlantic Salmon Decision Support Tool, an online tool for fisheries and river managers developed by our team. The student will thus have a chance to be part of the translation of basic ecological research into practical guidance for management and conservation.
The position will be based at the University of Strathclyde in Glasgow, Scotland and primarily supervised by Dr Neil Banas (Strathclyde U), Dr Colin Bull (Atlantic Salmon Trust / Stirling U), Dr Emma Tyldesley (Strathclyde U), and Dr Douglas Speirs (Strathclyde U), with the support of an advisory network of salmon researchers from Canada, the US, and Norway. The Marine Science group at Strathclyde, part of a larger Mathematics of Life Sciences group, consists of a community of approximately 20 researchers and PhD students from a variety of backgrounds. The student will be encouraged to attend local and overseas conferences and become part of international networks. The project is partly funded by the Fishmongers’ Company Charitable Fisheries Trust and aligned with larger research initiatives by the conservation charities Missing Salmon Alliance and Atlantic Salmon Trust.
The start date is 1 Oct 2025. A strong quantitative background (maths, programming) is required, and some knowledge of ecology or marine science is preferred but not required. The position is suitable for a student with a background in physics, computer science, or an allied field who wants to move into ecological applications, as well as for students from a biology background with some modelling experience.
The position comes with 3 years of full support, including a stipend of £20,000-22,000 per year, funds to fully cover fees at the UK-student level, and a bursary for travel and other research expenses. We welcome applications from international applicants, but please note that fees for international students exceed the funding available by approximately £20,000 per year. Part-time study is an option, with a minimum of 50% of full-time effort required.
To apply, please start by sending 1) a letter explaining your background and interest, 2) a full CV, and 3) the names and contact info for three references to Dr Neil Banas, neil.banas@strath.ac.uk by 15 Mar 2025. We will conduct interviews by Zoom in April, and invite successful candidates to apply formally through the Strathclyde University web portal afterwards.
Droplets are of fundamental interest across a wide range of scientific disciplines and applications. For example, in fog harvesting, water is collected from the condensation of fog onto netting to provide water for local communities. Droplets also play a key role in nature, such as in the adhesion of insects to solid surfaces, which is mediated by an oily secretion produced under their feet. Better understanding of droplet behaviour can therefore unlock exciting new technologies in adhesion science, improvements in the utility and efficacy of manufacturing techniques, and comprehension of fundamental aspects of mechanics and biology.
A key force in these scenarios is surface tension, which becomes increasingly important at the small scales associated with droplets. When droplets of liquid contact soft solid materials, these forces can deform the solids. This phenomenon is called elasto-capillarity, and it has been exploited to create a whole range of novel physical behaviours, including causing water to climb against gravity in a vertical thin gap and clumping arrays of deformable pillars during the fabrication of microstructured surfaces.
The dynamics of individual droplets bridging between deformable solids have been well studied in recent years; however, questions remain over how such droplets interact with one another. This project will use mathematical models of droplets to explore how multiple droplet bridges interact through surface deformations.
The project will involve mathematically modelling the interaction of multiple liquid bridges in a variety of settings, before adding further competing physical processes. The goal is to develop mathematical models that can quantify the dynamic interactions of these droplets in key scenarios, and extract practical principles for future applications.
The student will join an active and supportive cohort of PhD students within the Continuum Mechanics and Industrial Mathematics (CMIM) group in the Department of Mathematics and Statistics. There will be learning and training opportunities to support the student’s career development, as well as the opportunity to attend conferences and workshops.
Applicants should have, or be expecting to obtain in the near future, a first class or good 2.1 honours degree (or equivalent) in mathematics or in a closely related discipline with a high mathematical content.
Candidates are expected to be from a discipline with a high mathematical content. Knowledge of continuum mechanics, and mathematical methods for the solution of partial differential equations is desirable.
This project will start by 1st October 2026, and applications will be considered as they are received. Early application is strongly encouraged.
This project addresses a major imaging problem from biomedical research, by working with a range of image modalities including standard histological analysis, multiplexed biomarker analysis with diverse applications across oncology, inflammatory, cardiovascular and metabolic diseases.
Imaging science is a relatively new branch of applied mathematics, with emerging applications in almost all areas of cutting-edge research. The field is growing rapidly as new technologies constantly drive its development.
This imaging project is an opportunity to undertake one of our new and exciting cross-disciplinary projects lying at the interface of mathematics and cancer medicine. The candidate does not need any background in cancer medicine, but will need strong mathematical knowledge through a degree in maths, or in computer science / physics / engineering with essential maths components, as well as good programming skills.
We will work with both public datasets and real-life datasets from the collaborating company; visiting the company and being able to communicate and work with non-academics are expected for the project (for the latter some training will be given). Advanced geometry by way of PDEs or energy-minimisation functionals has been widely used in mathematical imaging. However, the aim of this project is to design vision-language frameworks, which use mathematical models and deep learning methods to analyse images of cells and tissues, transfer flexibly to both vision-language understanding and generation tasks, and describe the complexity of the tissue in the form of sentences.
To do this, the successful candidate will first be trained to understand current data analysis pipelines to co-register, collate and interrogate information across different data types and length-scales (resolution-wise). This is about multimodal imaging and involves variational models and deep learning algorithms for such multiscale problems. Then he or she will investigate how large language models (LLMs) can be used to interpret these biomedical images. LLMs have exhibited exceptional ability in language understanding, generation, interaction, and reasoning. Here, language is used as a generic interface to connect vision-based AI models to solve AI tasks in the biomedical context.
The new method is expected to take account of spatial location, multiple cell interactions and neighbourhood information, making it fundamentally different from traditional clustering methods such as K-means. From this project, a student can be trained in advanced mathematics, imaging models, biomedical knowledge, deep learning algorithms and novel natural language models. The resulting approach will serve as a key step toward advanced artificial intelligence in computational pathology and the acceleration of new drug development.
Overall, this studentship is attractive because the topic under study is interesting and modern, and the candidate has the flexibility to lead research on all or just one of the key components: maths, AI and NLP. Apart from the collaboration opportunity with a top UK company, there is also the possibility of collaborating with the Universities of Glasgow and Liverpool, with which the primary supervisor is strongly associated.
This is a scholarship fully funded by the University (covering both tuition fees and stipend for a UK student for 3.5 years), but the work will involve collaboration with the pharmaceutical company AstraZeneca (AZ). There is a strong possibility of an industrial top-up payment on the stipend beyond the standard UKRI studentship level.
Applicants should have, or be expecting to obtain in the near future, a first class or good 2.1 honours degree (or equivalent) in mathematics or in a closely related discipline with a high mathematical content.
Classification of teas (types, quality grades, region of origin, etc.) has been examined by many researchers, using both chemical composition and, more recently, digital image analysis techniques to extract features from the image that are useful for classification.
Factors affecting the success of classification include the choice of features, the classifier and the imaging modality. Building on previous work at Strathclyde in collaboration with the EEE department, this project will allow the student to examine the choice of any of these to achieve optimal results. Intending students should have a strong statistical background and excellent computer skills, and be competent, or able to quickly become competent, in the use of both R and Matlab.
Second supervisors: George Gettinby, Magnus Peterson
Worldwide losses of honey bee colonies have attracted considerable media attention in recent years and a huge amount of research. Researchers at Strathclyde have experience since 2006 of carrying out a series of surveys of beekeepers in Scotland (http://personal.strath.ac.uk/a.j.gray/) and now have 5 years of data arising from these surveys. These data have been used to estimate colony loss rates in Scotland and to provide a picture of beekeepers’ experience and management practices.
This project will examine the data in more detail than has been done so far, and is likely to involve data modelling and multivariate methods to identify risk factors. Part of the project will involve establishing the spatial distribution of various bee diseases.
This project will build on links with the Scottish Beekeepers’ Association and membership of COLOSS, a network linking honey bee researchers in Europe and beyond. Intending students should have a strong statistical background and excellent computer skills, and be competent, or able to quickly become competent, in the use of R.
Introduction
In this project we shall look at how stochastic models can be used to describe how infectious diseases spread. We will start off by looking at one of the simplest epidemic models, the SIS (susceptible-infected-susceptible) model. In this model a typical individual starts off susceptible, at some stage catches the disease and after a short infectious period becomes susceptible again.
These models are used for diseases such as pneumococcus amongst children and sexually transmitted diseases such as gonorrhea amongst adults (Bailey, 1975). Previous work has already looked at introducing stochastic noise into this model via the disease transmission term (Gray et al. 2011). This is called environmental stochasticity which means introducing the random effects of the environment into how the disease spreads. This results in a stochastic differential equation (SDE) model which we have analysed. We have derived an expression for a key epidemiological parameter, the basic reproduction number.
In the deterministic model this is defined as the expected number of secondary cases caused by a single newly-infected individual entering the disease-free population at equilibrium. The basic reproduction number differs between the stochastic model and the deterministic one, but in both cases it determines whether the disease dies out or persists. In the stochastic SIS SDE model we have shown the existence of a stationary distribution and that the disease will persist if the basic reproduction number exceeds one and die out if it is less than one.
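The stochastic SIS model described above can be simulated with a few lines of Euler-Maruyama stepping. The sketch below uses the form in which the transmission coefficient is perturbed by white noise, as in Gray et al. (2011); all parameter values are illustrative.

```python
import numpy as np

# Euler-Maruyama simulation of the stochastic SIS model in which the
# transmission coefficient is perturbed by white noise:
#   dI = [beta*I*(N - I) - (mu + gamma)*I] dt + sigma*I*(N - I) dB.
# Setting sigma = 0 recovers the deterministic SIS model.
# All parameter values below are illustrative.

def simulate_sis(N=100.0, beta=0.005, mu=0.05, gamma=0.15,
                 sigma=0.0, I0=10.0, T=50.0, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    I = I0
    for _ in range(int(T / dt)):
        drift = beta * I * (N - I) - (mu + gamma) * I
        diffusion = sigma * I * (N - I)
        I += drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
        I = min(max(I, 0.0), N)   # keep the path in [0, N]
    return I

# Deterministic R0 = beta*N/(mu + gamma) = 2.5 > 1, so the noise-free
# path settles at the endemic level I* = N*(1 - 1/R0) = 60.
```

With sigma = 0 the path settles at the deterministic endemic level; increasing sigma makes the path fluctuate around it, and for large enough noise the stochastic threshold can fall below one so that the disease dies out even though the deterministic R0 exceeds one.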
SDE Models with Environmental Stochasticity
We have also looked at other SDE models for environmental stochasticity. One of these took a simple deterministic model for the effect of condom use on the spread of HIV amongst a homosexual population and introduced environmental stochasticity into the disease transmission term. Again we found that a key parameter was the basic reproduction number which determined the behaviour of the system.
As before, this was different in the stochastic model from the deterministic one. Indeed it was possible for stochastic noise to stabilise the system and cause an epidemic which would have taken off in the deterministic model to die out in the stochastic model (Dalal et al., 2007). Similar effects were observed in a model for the internal viral dynamics of HIV within an HIV-infected individual (Dalal et al., 2008).
Demographic Stochasticity
The real world is stochastic, not deterministic, and it is difficult to predict with certainty what will happen. Another way to introduce stochasticity into epidemic models is demographic stochasticity. If we take the simple homogeneously mixing SIS epidemic model with births and deaths in the population we can derive a stochastic model to describe this situation by defining p(i, j, t) to be the probability that at time t there are exactly i susceptible and j infected individuals and deriving the differential equations satisfied by these probabilities.
Then we shall look at how stochastic differential equations can be used to approximate the above set of equations for p(i, j, t). This is called demographic stochasticity and arises from the randomness of discrete individual-level events: we approximate the discrete-state Markov jump process by a continuous stochastic one (Allen, 2007). Although the reasons for demographic and environmental stochasticity are quite different, the SDEs which describe the progress of the disease are similar. The first project which we shall look at is analysis of the SIS epidemic model with demographic stochasticity along the lines of our analysis of the SIS epidemic model with environmental stochasticity.
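The Markov jump process whose law is p(i, j, t) can be simulated exactly with the Gillespie algorithm. The sketch below does this for the closed-population SIS model, so only the infected count is tracked (births and deaths are omitted for brevity); parameters are illustrative.

```python
import random

# Exact (Gillespie) simulation of the SIS Markov jump process. Each infection
# or recovery changes the infected count by one; demographic stochasticity is
# precisely the randomness of these discrete events, which an SDE
# approximation then smooths into a continuous path.

def gillespie_sis(N=100, I0=5, beta=0.005, gamma=0.15, T=50.0, seed=1):
    rng = random.Random(seed)
    t, I = 0.0, I0
    while t < T and I > 0:
        rate_inf = beta * I * (N - I)     # S -> I events
        rate_rec = gamma * I              # I -> S events
        total = rate_inf + rate_rec
        t += rng.expovariate(total)       # waiting time to the next event
        if rng.random() < rate_inf / total:
            I += 1
        else:
            I -= 1
    return I
```

With these illustrative rates the process hovers around its quasi-stationary level unless an early chance run of recoveries drives it extinct, which is exactly the behaviour the p(i, j, t) equations encode.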
Further Work
After this we intend to look at other classical epidemiological models, in particular the SIR (susceptible-infected-removed) model in which an individual starts off susceptible, at some stage he or she catches the disease and after a short infectious period he or she becomes permanently immune. These models are used for common childhood diseases such as measles, mumps and rubella (Anderson and May, 1991). We would look at introducing both environmental and demographic stochasticity into this model.
Other epidemiological models which could be analysed include the SIRS (susceptible-infected-removed-susceptible) epidemic model, which is similar to the SIR epidemic model, except that immunity is not permanent, the SEIS (susceptible-exposed-infected-susceptible) model, which is similar to the SIS model but includes an exposed or latent class, and the SEIR (susceptible-exposed-infected-removed) model, which similarly extends the SIR model.
We would also aim to look at other population dynamic models such as the Lotka-Volterra predator-prey model. There is also the possibility of developing methods for parameter estimation in all of these epidemiological and population dynamic models, and we have started work on this with another Ph.D. student (J. Pan).
References
1. E. Allen, Modelling with Itô Stochastic Differential Equations, Springer-Verlag, 2007.
2. R.M. Anderson and R.M. May, Infectious Diseases of Humans: Dynamics and Control, Oxford University Press, Oxford, 1991.
3. N.T.J. Bailey, The Mathematical Theory of Infectious Disease and its Applications, Second Edition, Griffin, 1975.
4. A.J. Gray, D. Greenhalgh, L. Hu, X. Mao and J. Pan, A stochastic differential equation SIS epidemic model, SIAM J. Appl. Math. 71(3), 876-902, 2011.
5. N. Dalal, D. Greenhalgh and X. Mao, A stochastic model for AIDS and condom-use. J. Math. Anal. Appl. 325, 36-53, 2007.
6. N. Dalal, D. Greenhalgh and X. Mao, A stochastic model for internal HIV dynamics. J. Math. Anal. Appl. 341, 1084-1101, 2008.
It is possible for diseases to compete with each other so that one disease drives another to extinction. This phenomenon is called super-infection exclusion. We are collaborating with Dr. Valerie Odon, SIPBS (Strathclyde Institute of Pharmacy and Biomedical Sciences), and are interested in exploring when and how mosquito-only viruses, in particular densoviruses, can suppress dengue. Mosquito densoviruses are pathogenic viruses specific to mosquitoes. Kong et al. (2023) have shown that infection with a densovirus can significantly reduce a mosquito's susceptibility to dengue virus serotype 2 in Aedes albopictus mosquitoes.
There are already mathematical models of superinfection exclusion; however, none for insect-specific viruses. For example, Bremermann and Thieme (1989) discuss a multi-strain SIR model. They showed that the system will tend to the long-term endemic equilibrium corresponding to the strain with the highest basic reproduction number. Cai et al. (2013) extended this to a host-vector system using an SIR model for the host and an SI model for the vector. They showed that the strain with the highest partial host and vector reproduction numbers will outcompete the other strains. Glover and White (2020) examine a host-vector model to assess the impact of superinfection exclusion on vaccination, using dengue and yellow fever as examples. We would use existing mathematical models of host-vector systems to guide our model formulation, and use data from these experiments to parameterise the model and elucidate the potential of this method in curbing arbovirus transmission.
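As a starting point, the competitive exclusion result of Bremermann and Thieme can be reproduced in a toy two-strain SIR model with demography (population scaled to one, illustrative parameters, simple Euler stepping).

```python
import numpy as np

# Two-strain SIR model with births/deaths (population scaled to 1):
#   dS/dt  = mu*(1 - S) - S*(b1*I1 + b2*I2)
#   dIk/dt = (bk*S - (mu + gk))*Ik
# Strain k can grow only while bk*S > mu + gk; at the endemic state set by the
# fitter strain, S = (mu + g1)/b1, which drives the weaker strain out.

def two_strain_sir(beta=(0.6, 0.5), gamma=(0.2, 0.25), mu=0.02,
                   T=2000.0, dt=0.05):
    b, g = np.array(beta), np.array(gamma)
    S, I = 0.98, np.array([0.01, 0.01])
    for _ in range(int(T / dt)):
        dS = mu * (1.0 - S) - S * np.dot(b, I)
        dI = (b * S - (mu + g)) * I
        S += dS * dt
        I = I + dI * dt
    return S, I
```

Here R0 for strain k is beta_k/(mu + gamma_k) (about 2.7 versus 1.9 with these values), so in the long run the weaker strain is driven out while the fitter strain settles at its endemic level. A model for an insect-specific virus would need additional vector classes, as in the host-vector extension of Cai et al. (2013).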
References
Bremermann, H. J. and Thieme, H. R. (1989) A competitive exclusion principle for pathogen virulence. Journal of Mathematical Biology, 27(2), 179-190.
Cai, L.M., Martcheva, M. and Li, X.-Z. (2013) Competitive exclusion in a vector-host epidemic model with distributed delay. Journal of Biological Dynamics, 7(1), 47-67.
Glover, A. and White, A. (2020) A vector-host model to assess the impact of superinfection exclusion on vaccination strategies using dengue and yellow fever as case studies. Journal of Theoretical Biology, 484, 110014.
Kong, L., Xiao, J., Yang, L., Sui, Y., Wang, D., Chen, S., Liu, P., Chen, X.-G. and Gu, J. (2023) Mosquito densovirus significantly reduces the vector susceptibility to dengue virus serotype 2 in Aedes albopictus mosquitoes (Diptera: Culicidae). Infectious Diseases of Poverty, 12(1), 48.
Differential equation models are commonly used to model infectious diseases. The population is divided up into compartments and the flow of individuals through the various classes such as susceptible, infected and removed is modelled using a set of ordinary differential equations (Anderson and May, 1991; Bailey, 1975). A basic epidemiological parameter is the basic reproduction number R0. This is defined as the expected number of secondary cases produced by a single newly infected case entering a disease-free population at equilibrium (Diekmann and Heesterbeek). Typically the disease takes off if R0 > 1 and dies out if R0 ≤ 1.
However, media awareness campaigns are often used to influence behaviour and, if successful, can alter the behaviour of the population. This is an area which has not been studied much until recently. The student would survey the existing literature on media awareness models and, with the supervisor, formulate mathematical models using differential equations for the effect of behavioural change on disease incidence. These would be examined using both analytical methods and computer simulation, with parameters drawn from real data where appropriate. The mathematical techniques used would be differential equations, equilibrium and stability analyses, and computer simulation.
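A minimal starting point might look like the following toy SIR model with an awareness variable M that grows with prevalence and suppresses transmission. The suppression factor 1/(1 + a*M) is an assumed illustrative form, not taken from the cited papers.

```python
# Toy SIR model with a media-awareness variable M: campaigns ramp up with
# prevalence (rate eta) and decay at rate delta, and awareness suppresses
# transmission through the assumed factor 1/(1 + a*M). Explicit Euler stepping;
# all parameters are illustrative.

def sir_with_awareness(beta=0.5, gamma=0.1, eta=0.5, delta=0.1, a=10.0,
                       T=200.0, dt=0.01):
    S, I, M = 0.99, 0.01, 0.0
    peak = I
    for _ in range(int(T / dt)):
        infection = beta * S * I / (1.0 + a * M)
        dS = -infection
        dI = infection - gamma * I
        dM = eta * I - delta * M
        S += dS * dt
        I += dI * dt
        M += dM * dt
        peak = max(peak, I)
    return peak
```

Setting a = 0 switches the campaign off, so comparing peak prevalence with and without awareness quantifies the behavioural effect.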
References
1. R.M. Anderson and R.M. May, Infectious Diseases of Humans: Dynamics and Control, Oxford University Press, Oxford, 1991.
2. N.T.J. Bailey, The Mathematical Theory of Infectious Disease and its Applications, Second Edition, Griffin, 1975.
3. O. Diekmann, J. A. P. Heesterbeek and J. A. J. Metz, On the definition and computation of the basic reproduction number R0 for infectious diseases in heterogeneous populations. J. Math. Biol. 28, 365-382, 1990.
4. A. K. Misra, A. Sharma and J. B. Shukla, Modelling and analysis of effects of awareness programs by media on the spread of infectious diseases. Math. Comp. Modelling 53, 1221-1228, 2011.
5. A. K. Misra, A. Sharma and V. Singh, Effect of awareness programs in controlling the prevalence of an epidemic with time delay. J. Biol. Systems 19(2), 389-402, 2011.
In recent years there has been much work on reaction-diffusion equations in which the diffusion mechanism is not the usual Fickian one. Examples are integro-differential equations, porous media type equations, pseudodifferential equations, p-Laplacian type equations and prescribed curvature type (saturating flux) equations.
The motivation for this work comes from material science and mathematical ecology. However, there are applied contexts where these diffusion mechanisms have never been considered. One is the area of combustion and the other is the area of regularised conservation laws and shock propagation. This project, which would build on my work over the years on integro-differential models and, more recently, with M. Burns on prescribed curvature equations, will use PDE, asymptotic and topological methods to explore the dynamics of blowup and of shock propagation in canonical examples of reaction equations and nonlinear scalar conservation laws regularised by non-Fickian diffusion terms.
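As a small numerical illustration of one non-Fickian mechanism, the sketch below evolves a bistable reaction equation in which the Laplacian is replaced by the convolution operator J*u - u, a standard integro-differential replacement for Fickian diffusion. The kernel, grid and parameters are all illustrative.

```python
import numpy as np

# Nonlocal bistable reaction-diffusion sketch:
#   u_t = (J*u - u) + u(1 - u)(u - a),   a = 0.3,
# where J is a normalised Gaussian kernel, so J*u - u replaces u_xx.
# With a < 1/2 the state u = 1 invades u = 0, so a front moves right.

L, n, m = 40.0, 400, 50
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
xk = (np.arange(2 * m + 1) - m) * dx        # kernel support, about [-5, 5]
kernel = np.exp(-xk ** 2)
kernel /= dx * kernel.sum()                 # normalise so J integrates to one

def step(u, dt=0.01, a=0.3):
    up = np.pad(u, m, mode="edge")          # constant extension at boundaries
    conv = dx * np.convolve(up, kernel, mode="valid")   # (J*u)(x)
    return u + dt * (conv - u + u * (1.0 - u) * (u - a))

u = np.where(x < 0.0, 1.0, 0.0)             # front initial datum
for _ in range(2000):
    u = step(u)

front = x[np.argmax(u < 0.5)]               # interface has moved to the right
```

With the balanced bistable nonlinearity the solution stays bounded and a front propagates; a combustion-type source term would instead be used to study blowup.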
References:
[1] M. Burns and M. Grinfeld, Steady state solutions of a bistable quasilinear equation with saturating flux, European J. Appl. Math. 22 (2011), 317-331.
Recently, a new class of models has been developed to describe, for example, phase separation in materials such as binary alloys. These take the form of integro-differential equations. Coarsening, that is, the creation of large-scale patterns in such models, is poorly understood.
There are partial results [1, 2] that use the maximum principle, but for most interesting problems such a tool is not available. This project will be a mixture of analytic and numerical work and will need tools of functional analysis and semigroup theory.
References:
[1] D. B. Duncan, M. Grinfeld, and I. Stoleriu, Coarsening in an integro-differential model of phase transitions, Euro. J. Appl. Math. 11 (2000), 561-572.
[2] V. Hutson and M. Grinfeld, Non-local dispersal and bistability, Euro. J. Appl. Math. 17 (2006), 221-232.
Up to 2002, most of the existing strong convergence theory for numerical methods required the coefficients of the SDEs to be globally Lipschitz continuous [1]. However, most SDE models in real life do not obey the global Lipschitz condition. It was in this spirit that Higham, Mao and Stuart published a very influential paper [2] in 2002 (319 citations on Google Scholar) which opened a new chapter in the study of numerical solutions of SDEs: the study of the strong convergence question for numerical approximations under a local Lipschitz condition.
Because the classical explicit Euler-Maruyama (EM) method has a simple algebraic structure, low computational cost and an acceptable convergence rate under the global Lipschitz condition, it has attracted a great deal of attention.
Although the EM method has been shown to diverge strongly in finite time for SDEs that satisfy only a local Lipschitz condition, several modified EM methods have recently been developed for such SDEs. For example, the tamed EM method was developed in 2012 to approximate SDEs with a one-sided Lipschitz drift coefficient and a linearly growing diffusion coefficient. The stopped EM method was developed in 2013. Recently, Mao [3] initiated a significantly new method, called the truncated EM method, for nonlinear SDEs. The aim of this PhD is to develop the truncated EM method further. The detailed objectives are:
(1) To study the strong convergence of the truncated EM method in finite time for SDEs under the generalised Khasminskii condition, and its convergence rate.
(2) To use the truncated EM method to investigate the stability of nonlinear SDEs; namely, to study whether the numerical method is stochastically stable when the underlying SDE is stochastically stable, and whether we can infer that the underlying SDE is stochastically stable when the numerical method is stochastically stable for small stepsizes.
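The idea behind truncation can be sketched as follows. This is a simplified illustration only: the radius rule dt**(-1/4) is a crude stand-in for the careful choice of truncation functions in Mao (2015), and the example SDE is chosen for illustration.

```python
import numpy as np

# Simplified illustration of the truncated Euler-Maruyama idea: before each
# step the state is projected onto a ball whose radius grows as the stepsize
# shrinks, so superlinearly growing coefficients cannot blow the scheme up.
# Example SDE: dX = (X - X^3) dt + 0.5*X dB, whose drift is only one-sided
# Lipschitz (not globally Lipschitz).

def truncated_em(f, g, x0, T, dt, seed=42):
    rng = np.random.default_rng(seed)
    radius = dt ** -0.25            # crude stand-in for Mao's truncation level
    x = x0
    for _ in range(int(T / dt)):
        xt = np.clip(x, -radius, radius)    # truncate before evaluating f, g
        x = x + f(xt) * dt + g(xt) * np.sqrt(dt) * rng.standard_normal()
    return x

f = lambda x: x - x ** 3    # superlinear, stabilising drift
g = lambda x: 0.5 * x       # linearly growing diffusion
```

Because the coefficients are only ever evaluated inside the truncation ball, each increment stays bounded for a fixed stepsize, while the ball expands as dt shrinks so the scheme can still converge to the true solution.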
A PhD studentship might be available for the project.
References:
[1] Mao X., Stochastic Differential Equations and Applications, 2nd Edition, Elsevier, 2007.
[2] Higham D., Mao X., Stuart A., Strong convergence of Euler-type methods for nonlinear stochastic differential equations, SIAM J. Numer. Anal. 40(3) (2002), 1041--1063.
[3] Mao X., The truncated Euler-Maruyama method for stochastic differential equations, J. Comput. Appl. Math. 290 (2015), 370--384.
Cholesteric liquid crystals are chiral systems which possess a spontaneously formed helical structure with the pitch in the micron range, which is important for various applications in optics and nanophotonics. In recent years interest has shifted towards lyotropic cholesterics, which are solutions of various chiral macromolecules, viruses or chiral nanocrystals. These systems are important for biology (for example, cholesteric states of DNA) and also provide some very useful natural anisotropic chiral materials.
Among the most interesting resources to explore are cellulose and chitin, key biopolymers in the plant and animal world, respectively. Both have excellent mechanical properties and can be extracted as nanorods with a high degree of crystallinity. Both are also chiral. The molecular chirality of such nanorods is amplified into a helically modulated, long-range ordered cholesteric liquid crystal phase when they are suspended in water.
The aim of this project is to develop a molecular-statistical theory of chirality transfer in the cholesteric nanorod phase, determined by steric and electrostatic chiral interactions, and to describe quantitatively the variation of the helix pitch as a function of rod length, concentration, dispersity and temperature. The theory will be built upon previous results obtained for different cholesteric liquid crystals (see some references to our work below).
The project will include a collaboration with two experimental groups, at the University of Luxembourg and the University of Stuttgart. These groups have enormous expertise in the field of lyotropic liquid crystals.
[1] Honorato-Rios, C., Lehr, C., Schütz, C., Sanctuary, R., Osipov, M. A., Baller, J. and Lagerwall, J. P. F., Fractionation of cellulose nanocrystals: enhancing liquid crystal ordering without promoting gelation, Asia Materials, 10, 455-465 (2018).
[2] Dawin, U. C., Osipov, M. A. and Giesselmann, F., Electrolyte effects on the chiral induction and on its temperature dependence in a chiral nematic lyotropic liquid crystal, J. Phys. Chem. B, 114(32), 10327-10336 (2010).
[3] Emelyanenko, A. V., Osipov, M. A. and Dunmur, D. A., Molecular theory of helical sense inversions in chiral nematic liquid crystals, Phys. Rev. E, 62, 2340 (2000).
Elastic constants of nematic liquid crystals, which describe the energy associated with orientational deformation of such anisotropic fluids, are among the most important parameters for various applications of liquid crystal materials. The elastic constants of nematic liquid crystals have been well investigated both experimentally and theoretically in the past. During the past decade a number of novel liquid crystal materials with unconventional molecular structure have been investigated, and it has been found that these systems are characterised by anomalous values and behaviour of the elastic constants. In particular, it has been shown that in the nematic phase exhibited by the so-called V-shaped bent-core liquid crystals, two of the three elastic constants decrease nearly to zero with decreasing temperature. This behaviour is still very poorly understood.
It should be noted that bent-core liquid crystals attract significant attention at present because they also exhibit a number of unusual novel phases with a nanoscale helical structure. It is now generally accepted that the transition into these phases may be driven by the dramatic reduction of the elastic constants.
The aim of this project is to generalise the existing molecular-statistical theory of elasticity of nematic liquid crystals to bent-core nematics, composed of biaxial and polar molecules, using preliminary results obtained in recent years (see, for example, the references to some of our recent papers given below). Another aim is to explain the existing experimental data on the temperature variation of the elastic constants of bent-core nematic liquid crystals.
The project will include a collaboration with the experimental group at the University of Leeds and a theoretical group from the Russian Academy of Sciences. These collaborations are very important for the success of the project.
[1] M. A. Osipov and G. Pajak, Effect of polar intermolecular interactions on the elastic constants of bent-core nematics and the origin of the twist-bend phase, The European Physical Journal E 39, 45 (2016).
[2] M. A. Osipov and G. Pajak, Polar interactions between bent-core molecules as a stabilising factor for inhomogeneous nematic phases with spontaneous bend deformations, Liquid Crystals 44, 58 (2016).
[3] S. Srigengan, M. Nagaraj, A. Ferrarini, R. Mandle, S. J. Cowling, M. A. Osipov, G. Pająk, J. W. Goodby and H. F. Gleeson, J. Mater. Chem. C, 2018, 6, 980.
The development of AI-based methodologies and architectures to improve financial, environmental and health-care prediction accuracy, enhance data reliability, and support evidence-based policy has become a priority for many countries, including the UK.
This project aims to develop novel time series machine learning (TSML) methodology for the imputation of missing values and the forecasting of high-dimensional data. The research is particularly innovative in the discrete-valued case, for which, to the best of our knowledge, there is little existing work in the literature. We will work on the following two aspects of modelling high-dimensional time series.
(i) Imputation of missing data in high-dimensional time series
High-dimensional data are common in fields such as finance, healthcare, and environmental science. They are normally recorded in time order, forming high-dimensional time series datasets, for example, air pollution data across many locations. Such datasets inevitably contain missing values, which hinder the application of many analytical and statistical methods, so effective handling of these gaps is essential before model development. In ultra-high-dimensional settings, filling in missing entries presents significant challenges for machine learning and statistical approaches. Existing techniques (e.g. Obata et al. (2024)) suit only the low-dimensional, continuous-valued case; methods for the high-dimensional and/or discrete-valued cases are increasingly in demand. In this project, relationships between components (the so-called network structure) and temporal dependency are used jointly to obtain accurate imputation. We will introduce a novel framework that models evolving inter-correlations through a Markov regime-switching network with a large number of nodes, temporal dynamics through a state-space formulation (e.g. Fan et al. (2020)), and dimension reduction through factor models. We will also develop self-exciting spatio-temporal models for imputation, under the assumption that the imputed data follow a nested family of continuous and discrete distributions, not only normal distributions.
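To make the state-space component concrete, here is a minimal sketch (our own illustrative code, not the project's framework) of imputation in a single series with a local-level state-space model: a Kalman filter simply skips the missing time points, and a Rauch–Tung–Striebel smoothing pass fills the gaps with their smoothed means. The function name and the noise parameters `q` and `r` are illustrative assumptions.

```python
import numpy as np

def kalman_impute(y, q=0.1, r=1.0):
    """Impute missing values (np.nan) in a 1-D series under the local-level
    model x_t = x_{t-1} + w_t, y_t = x_t + v_t, with process variance q and
    observation variance r, via Kalman filtering plus RTS smoothing."""
    n = len(y)
    xf = np.zeros(n); Pf = np.zeros(n)   # filtered mean / variance
    xp = np.zeros(n); Pp = np.zeros(n)   # one-step predicted mean / variance
    x, P = 0.0, 1e6                      # diffuse initialisation
    for t in range(n):
        xp[t], Pp[t] = x, P + q          # predict
        if np.isnan(y[t]):               # no observation: carry the prediction
            x, P = xp[t], Pp[t]
        else:                            # update with the observation
            k = Pp[t] / (Pp[t] + r)
            x = xp[t] + k * (y[t] - xp[t])
            P = (1 - k) * Pp[t]
        xf[t], Pf[t] = x, P
    xs = xf.copy()                       # backward (RTS) smoothing pass
    for t in range(n - 2, -1, -1):
        g = Pf[t] / Pp[t + 1]
        xs[t] = xf[t] + g * (xs[t + 1] - xp[t + 1])
    out = y.copy()
    out[np.isnan(y)] = xs[np.isnan(y)]   # fill only the gaps
    return out
```

The project's setting replaces this scalar state by a high-dimensional latent process with network-structured and regime-switching dynamics, but the filter-then-smooth mechanism is the same.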
(ii) Machine learning architectures for accurate and robust forecasting of high-dimensional time series
For the imputed data, we will develop new machine learning and statistical models for forecasting high-dimensional time series. We will develop deep learning models related to temporal convolutional networks and transformers, improve existing methods (e.g. Fan et al. (2020), Obata et al. (2024)), and extend them to high-dimensional and discrete-valued cases using transformer-based architectures, factor models (Liu et al. (2025), Pan and Yao (2008)) and recent advances in probabilistic and statistical hybrid approaches. We will also propose dynamic uncertainty quantification, combining Bayesian inference and quantile regression to enhance robustness and achieve probabilistic forecasting (Dvijotham et al. (2023)).
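As a concrete illustration of the quantile-regression route to probabilistic forecasting, the sketch below fits upper and lower conditional quantiles of a first-order autoregression by subgradient descent on the pinball loss. The function names, the AR(1) form, and the hyperparameters are our own illustrative choices, not the project's architecture.

```python
import numpy as np

def pinball_loss(y, yhat, tau):
    """Quantile (pinball) loss: the asymmetric penalty that is minimised,
    in expectation, by the tau-th conditional quantile."""
    e = y - yhat
    return np.mean(np.maximum(tau * e, (tau - 1) * e))

def fit_quantile_ar1(y, tau, lr=0.05, steps=4000):
    """Fit y_t ~ a + b * y_{t-1} at quantile tau by subgradient descent
    on the pinball loss -- a minimal quantile-autoregression sketch."""
    x, z = y[:-1], y[1:]
    a, b = 0.0, 0.0
    for _ in range(steps):
        e = z - (a + b * x)
        g = np.where(e > 0, -tau, 1 - tau)   # subgradient w.r.t. prediction
        a -= lr * g.mean()
        b -= lr * (g * x).mean()
    return a, b
```

Fitting, say, the 0.1 and 0.9 quantiles gives an 80% predictive interval directly, which is the sense in which quantile regression delivers probabilistic rather than point forecasts.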
The objectives are: to develop machine learning architectures for high-dimensional time series modelling that improve forecasting accuracy and robustness; and to detect anomalies and impute missing data in high-dimensional time series with minimal error.
We will validate the proposed models through real-world case studies in forecasting, anomaly detection, and decision support. Application areas include: financial forecasting and risk modelling across interconnected markets; public health monitoring across multiple regions and large healthcare systems; and trend analysis of air and water pollution across districts. Relevant datasets will be drawn from public sources, industrial partners, and healthcare collaborators to ensure sufficient high-dimensional data for model evaluation. We will show that the proposed techniques make imputation and forecasting both feasible and accurate in each of these applications.
The following outcomes are expected: (i) publications in top-tier journals and conferences; (ii) open-source time series AI models for imputation and forecasting; (iii) deployment-ready prototypes for selected applications.
Applicants should hold a first-class or upper-second-class honours degree in Statistics, Applied Mathematics or Econometrics, or an MSc with distinction in Mathematics, Statistics or Econometrics. Outstanding overseas applicants with an equivalent degree are also encouraged to apply.
The position comes with 3 years of full support, including a stipend of £20,780 per year (2025-26). We welcome applications from international applicants, but please note that fees for international students exceed the funding available by approximately £20,000 per year.
We wish to develop innovative methods for modelling high-dimensional time series. Practical time series data, both continuous-valued and discrete-valued (such as climate records, medical data, and financial and economic data), will be used for empirical analysis.
We are concerned with models for forecasting the multivariate conditional mean and the multivariate conditional variance (volatility), using dimension-reduction techniques such as dynamic factor analysis.
We will also discuss the estimation of models for panel data analysis and option valuation with co-integrated asset prices.
Forecasting a functional is an increasingly common problem. A functional may be a curve, a spatial process, or a graph/image. In contrast to conventional time series analysis, in which observations are scalars or vectors, we observe a functional at each time point: for example, daily mean-variance efficient frontiers of portfolios, yield curves, annual production charts and annual weather records.
Our goal is to develop new models, methodology and associated theory under a general framework of functional time series analysis for modelling complex dynamic phenomena. We intend to build functional time series models and to use them for forecasting.
When the true economic system consists of many equations, or our economic observations are of very high dimension, one may encounter the "curse of dimensionality" problem. We will impose a common factor structure to reduce dimension for the parametric and nonparametric stability analysis of a large system, and will justify replacing the unobservable common factors by principal components in parametric and nonparametric estimation.
In contrast to conventional factor models, which focus on reducing dimension and modelling the conditional first moment, the proposed project devotes attention to dimension reduction and statistical inference for conditional second moments (covariance matrices). The direct motivation is the increasing need to model and explain the risk and uncertainty of a large economic system.
The other distinctive point is that the proposed project considers factor models for high-frequency data. A key application is the analysis of high-dimensional, high-frequency financial time series, although the potential uses are much wider.
Many fish populations worldwide have been heavily exploited, and there is accumulating evidence from both observational and theoretical studies that this harvesting can induce evolutionary changes. Such responses can affect stock sustainability and catch quality, so there is a recognised need for new management strategies that minimise these risks. Most results suggest that high mortality on larger fish favours early maturation.
However, recent theoretical work has shown that trade-offs between growth and maturation can lead to more complex evolutionary responses. Surprisingly, harvesting large fish can select for either late or early maturation depending on the effect of maturation on growth rate. To date most theoretical studies have used evolutionary invasion analyses on simple age-based discrete-time models or on continuous-time coupled ODE representations of size structure.
In common with many generic models of fish population dynamics, population control occurs through unspecified density dependence at settlement. While these simplifications carry the advantage of analytical tractability, the analysis assumes steady-state populations. This, together with the stylised life histories, precludes comparing model results with field data on secular changes in size distributions and sexual maturity.
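The flavour of such an invasion analysis can be shown with a deliberately minimal toy model (our own illustrative construction, with made-up parameter values, not any of the published models): length grows until maturation and then stops (the trade-off), fecundity scales with body volume, and fish above a harvest length face extra mortality. With density dependence acting only at settlement, the evolutionarily stable maturation age is the one maximising lifetime reproductive success R0.

```python
def lifetime_r0(a_mat, s=0.8, h=0.3, g=1.0, l_harvest=8.0, tmax=40):
    """Lifetime reproductive success R0 for a fish maturing at age a_mat:
    length grows by g per year until maturation and is fixed thereafter,
    fecundity is proportional to length cubed, annual natural survival is
    s, and fish longer than l_harvest face extra harvest mortality h."""
    fec = (g * a_mat) ** 3             # fecundity set by length at maturity
    surv, r0 = 1.0, 0.0
    for t in range(tmax):
        length = g * min(t, a_mat)     # growth stops at maturation
        if t >= a_mat:
            r0 += surv * fec           # reproduce each mature year
        surv *= s * (1 - h) if length > l_harvest else s
    return r0

def ess_maturation_age(h, ages=range(1, 16), **kw):
    """With density dependence at settlement, the ESS maturation age is
    the candidate age maximising R0."""
    return max(ages, key=lambda a: lifetime_r0(a, h=h, **kw))
```

In this toy, raising the harvest rate moves the R0-maximising maturation age down to the largest age at which fish stay below the harvest threshold, the classic "harvest large fish, select for early maturation" outcome; the trade-off results cited above show that richer growth-maturation couplings can reverse this.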
The work in this project will develop a new generation of testable models for fisheries-induced adaptive change, with the potential to inform future management decisions. This will involve developing a consumer-resource model in which a length-structured fish population feeds on a dynamic biomass spectrum.
Differently-sized fish will compete for food by exploiting overlapping parts of the food size spectrum. The population will be partitioned by length at maturity, and this will be the heritable trait under selection. The model will be used to explore how changes in mortality and food abundance affect the evolutionarily stable distribution of maturation lengths.
Comparisons with survey data on North Sea demersal fish will be used to assess whether historical harvest rates are sufficient to explain observed growth-rate changes as an evolutionary response. Finally, the evolutionarily stable optimal harvesting strategies will be identified.