Researchers develop ‘deep learning’ subsea survey system


Researchers are developing a ‘deep learning’ underwater survey system that can automatically inspect and identify potential problems with oil and gas pipelines.

Presently, subsea surveys of oil and gas pipelines involve human operators piloting remotely operated vehicles (ROVs) fitted with cameras, which scan the pipes for leaks or potential hazards to infrastructure such as boulders or fishing nets.

The task of identifying and annotating these ‘events’ in the video footage, while time-consuming, is relatively straightforward for an expert operator. However, results are frequently affected by sand agitation, sea life or vegetation.

Inspection expertise

Researchers from the University of Strathclyde and subsea service provider N-Sea hope to automate the interpretation of the video footage using ‘deep learning’ techniques.

With the support of The Data Lab Innovation Centre, the team has combined N-Sea's inspection expertise with the data analytics research of the Institute of Sensors, Signals and Communications in the Department of Electronic and Electrical Engineering to develop an algorithm that annotates video frames automatically and in real time.

The project team's approach uses an ensemble that combines the outputs of three classifiers, each operating independently on one of the three video streams: port, starboard and centre.
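The article does not specify how the three per-stream outputs are combined, but a common ensembling scheme for independent classifiers is a simple majority vote. The sketch below illustrates the idea; the classifier functions and event labels are placeholders, not the project's actual models.

```python
from collections import Counter

# Placeholder per-stream classifiers: in the real system each would be a
# trained model for its own camera view (port, starboard, centre).
def classify_port(frame):
    return "exposure"

def classify_starboard(frame):
    return "exposure"

def classify_centre(frame):
    return "burial"

def ensemble_label(port_frame, stbd_frame, centre_frame):
    """Combine three independent per-stream predictions by majority vote."""
    votes = [
        classify_port(port_frame),
        classify_starboard(stbd_frame),
        classify_centre(centre_frame),
    ]
    label, _count = Counter(votes).most_common(1)[0]
    return label

print(ensemble_label(None, None, None))  # majority of the three votes
```

A vote like this lets a confident reading from two streams override a spurious prediction on the third, e.g. when sand agitation obscures one camera.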

David Murray, Survey and Inspection Data Centre Manager at N-Sea, said: “Recently, a number of automatic video annotation approaches have been announced. However, these have been demonstrated in clear waters, using bespoke, vendor-specific camera systems that mitigate motion blur and poor image quality through strobed lighting and high shutter speeds.

“Although these technological advancements in the equipment are beneficial, the vast majority of working class ROVs are still equipped with standard cameras operating in murky waters.”

To address this, the project has developed a 24-layer convolutional neural network to identify features in the video frames. It currently supports a number of events, such as burial, exposure and field joints, with high classification accuracy on still images. Combining predictions across a number of consecutive frames further boosts the network's performance.
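The article does not say how predictions on consecutive frames are combined; one straightforward interpretation is temporal smoothing, where each frame's label is replaced by the majority label over a short sliding window, suppressing one-off misclassifications. A minimal sketch, with hypothetical labels and window size:

```python
from collections import Counter, deque

def smooth_predictions(frame_labels, window=5):
    """Majority-vote each frame's label over a sliding window of the most
    recent predictions, so isolated misclassifications are suppressed."""
    history = deque(maxlen=window)
    smoothed = []
    for label in frame_labels:
        history.append(label)
        majority, _count = Counter(history).most_common(1)[0]
        smoothed.append(majority)
    return smoothed

# A noisy per-frame sequence: two stray labels amid a run of "exposure".
raw = ["exposure", "exposure", "burial", "exposure",
       "exposure", "field_joint", "exposure"]
print(smooth_predictions(raw))  # every stray label is voted out
```

The window length trades responsiveness against noise rejection: a longer window removes more spurious frames but delays detection of genuine event transitions.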

Dr Christos Tachtatzis, Lecturer and Chancellor's Fellow at the University of Strathclyde, said: “Prior state-of-the-art approaches demand high picture clarity and high visibility to operate effectively.

“We have purposely trained and validated the model using video footage typically acquired by ROVs, making the model applicable to the wider subsea survey community.”

The partners are currently establishing a follow-on project to increase the Technology Readiness Level of the model and enable its easy adoption by the inspection industry.

Gillian Docherty, The Data Lab CEO, said: “The Data Lab seeks to empower Scottish businesses and individuals to harness the full potential of data. Collaborative innovation projects are a key component of our work, and we were more than happy to partner with N-Sea and the University of Strathclyde to achieve their goal. 

“Through the use of data science, we have worked together to develop an algorithm that will be the catalyst for a step-change in how the industry approaches subsea inspection in a challenging environment, which is a fantastic result. Scotland is certainly seizing the data opportunity, and it is collaborations such as this that bolster our international reputation for excellence in data science skills and innovation.”