AI & Software Engineering

We are looking at how artificial intelligence techniques can be used to support the process of software engineering, with the aim of creating systems that are robust, reliable and adaptable. We are also looking at the converse problem of how AI systems themselves should be engineered, which is particularly important as AI technologies are increasingly deployed in critical situations.

Research themes within this topic include:

  • Test Data Generation: Search-based AI strategies have long been shown to generate effective sets of test data for a system, freeing the software engineer from this tedious task. Real challenges remain, however, such as generating test data that is meaningful to the engineer and testing non-traditional systems (such as AI systems themselves). A minimal sketch of the underlying search appears after this list.
  • Test Outcome Classification: While it is possible to generate large volumes of test data, the output from the system under test still needs to be checked. To this end we are investigating how AI-based techniques such as anomaly detection can be used to distinguish effectively between passing and failing tests (see the second sketch after this list).
  • Fault Localisation: If we detect a failing test, how do we identify the associated code that needs to be fixed? Approaches under investigation here involve either comparing the paths through the system taken by passing and failing tests, or using text-based analysis techniques to work from user-supplied bug reports (the third sketch after this list illustrates the path-comparison idea).
  • Systems Evolution: Once we have detected a fault, how do we go about fixing it? Or, alternatively, how can we respond rapidly to changes in requirements? Search-based AI techniques can evolve code, searching through a huge space of potential solutions to generate fixes or new system configurations that adapt autonomously to internal errors or external environmental changes (the final sketch after this list shows the idea in miniature).
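
The first sketch is a minimal illustration of search-based test data generation, assuming a toy function under test, a hand-written branch-distance fitness function, and a simple hill climber; the function, the target branch and the input ranges are all invented for illustration.

```python
# A minimal sketch of search-based test data generation (hill climbing
# over a branch-distance fitness function). Everything here is illustrative.
import random

def function_under_test(x, y):
    # Hypothetical target branch: we want inputs that reach this condition.
    if x * 2 == y + 10:
        return "target branch"
    return "other branch"

def branch_distance(x, y):
    # Branch distance for the equality condition above: zero when the
    # target branch would be taken, larger the further away the inputs are.
    return abs(x * 2 - (y + 10))

def hill_climb(max_steps=10_000):
    # Start from a random candidate and repeatedly accept neighbouring
    # inputs that do not increase the branch distance.
    x, y = random.randint(-100, 100), random.randint(-100, 100)
    for _ in range(max_steps):
        if branch_distance(x, y) == 0:
            return x, y                      # covering input found
        nx = x + random.choice([-1, 1])
        ny = y + random.choice([-1, 1])
        if branch_distance(nx, ny) <= branch_distance(x, y):
            x, y = nx, ny                    # accept the better neighbour
    return None

inputs = hill_climb()
print(inputs, function_under_test(*inputs) if inputs else "no covering input found")
```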
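
The second sketch illustrates anomaly-based test outcome classification. The numeric encoding of each execution (output length, exit code, trace length) and the use of scikit-learn's IsolationForest are illustrative assumptions, not the setup used in the publications below.

```python
# A minimal sketch of anomaly detection over encoded test executions.
# Feature choices and data are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical encoded executions: mostly similar-looking passing runs plus
# a few executions whose outputs look unusual and are therefore suspect.
executions = np.array([
    [12, 0, 340], [11, 0, 338], [13, 0, 341], [12, 0, 339],
    [12, 0, 342], [11, 0, 337], [95, 1, 12],  [87, 1, 15],
])

detector = IsolationForest(contamination=0.25, random_state=0)
labels = detector.fit_predict(executions)   # 1 = inlier, -1 = anomaly

for features, label in zip(executions, labels):
    verdict = "likely FAIL" if label == -1 else "likely pass"
    print(features, verdict)
```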
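
The third sketch shows the path-comparison idea behind spectrum-based fault localisation: statements are ranked by the standard Ochiai suspiciousness metric, computed from which statements the passing and failing tests execute. The coverage data is invented.

```python
# A minimal sketch of spectrum-based fault localisation using Ochiai.
from math import sqrt

# (covered statements, passed?) for a handful of hypothetical tests.
tests = [
    ({"s1", "s2", "s3"}, True),
    ({"s1", "s2", "s4"}, True),
    ({"s1", "s3", "s5"}, False),
    ({"s1", "s4", "s5"}, False),
]

statements = set().union(*(cov for cov, _ in tests))
total_failed = sum(1 for _, passed in tests if not passed)

def ochiai(stmt):
    # ef: failing tests covering stmt; ep: passing tests covering stmt.
    ef = sum(1 for cov, passed in tests if stmt in cov and not passed)
    ep = sum(1 for cov, passed in tests if stmt in cov and passed)
    denom = sqrt(total_failed * (ef + ep))
    return ef / denom if denom else 0.0

# Rank statements by suspiciousness: the faulty statement should float up.
for stmt in sorted(statements, key=ochiai, reverse=True):
    print(f"{stmt}: {ochiai(stmt):.2f}")
```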
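
The final sketch shows the systems-evolution idea in miniature: candidate "patches" (here, alternative comparison operators) are mutated at random and scored by how many tests they pass. The buggy function and its test suite are hypothetical, and real search-based repair operates over far richer edit spaces.

```python
# A minimal sketch of search-based repair: mutate a candidate patch and
# keep it if it passes at least as many tests. Everything is illustrative.
import random
import operator

OPERATORS = [operator.lt, operator.le, operator.gt, operator.ge]

def make_clamp(op):
    # Candidate variant of a buggy clamp function; the seeded fault is
    # assumed to be a wrong comparison operator.
    def clamp(value, limit):
        return limit if op(value, limit) else value
    return clamp

TESTS = [((5, 10), 5), ((15, 10), 10), ((10, 10), 10)]

def fitness(op):
    clamp = make_clamp(op)
    return sum(1 for args, expected in TESTS if clamp(*args) == expected)

def search(generations=50):
    current = random.choice(OPERATORS)
    for _ in range(generations):
        if fitness(current) == len(TESTS):
            return current                      # all tests pass: repaired
        candidate = random.choice(OPERATORS)    # mutate: try another operator
        if fitness(candidate) >= fitness(current):
            current = candidate
    return current

print(search().__name__)
```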

Representative publications

  • Marc Roper:
    Using Machine Learning to Classify Test Outcomes. AITest 2019
  • Rafig Almaghairbe, Marc Roper:
    An Empirical Comparison of Two Different Strategies to Automated Fault Detection: Machine Learning Versus Dynamic Analysis. ISSRE Workshops 2019: 378-385
  • Rafig Almaghairbe, Marc Roper:
    Separating passing and failing test executions by clustering anomalies. Softw. Qual. J. 25(3): 803-840 (2017)
  • Rafig Almaghairbe, Marc Roper:
    Automatically Classifying Test Results by Semi-Supervised Learning. ISSRE 2016: 116-126
  • Rafig Almaghairbe, Marc Roper:
    Building Test Oracles by Clustering Failures. AST@ICSE 2015: 3-7
  • Steven Davies, Marc Roper:
    Bug localisation through diverse sources of information. ISSRE (Supplemental Proceedings) 2013: 126-131