Dr Andrew Abel

Strathclyde Chancellor's Fellow

Computer and Information Sciences


Publications

Gabor-based audiovisual fusion for Mandarin Chinese speech recognition
Xu Yan, Wang Hongce, Dong Zhongping, Li Yuexuan, Abel Andrew
2022 30th European Signal Processing Conference (EUSIPCO), pp. 603-607 (2022)
https://doi.org/10.23919/eusipco55093.2022.9909634
Classroom engagement analysis and feedback using MER
Devasya Ashwitha, Abel Andrew, Wang Yuchen, Jia Laibing
BERA Annual Conference 2025 (2025)
TranscribeSight: redefining ASR evaluation for Industry 4.0 knowledge capture
Alsayegh Ali, Ul Ain Noor, Abel Andrew, Jia Laibing, Masood Tariq
Procedia Computer Science (2025)
Continuous valence-arousal space prediction and recognition based on feature fusion
Ayoub Misbah, Zhang Haiyang, Abel Andrew
2024 IEEE International Conference on Industrial Technology (ICIT) (2024)
https://doi.org/10.1109/icit58233.2024.10540915
Maximum Gaussianality training for deep speaker vector normalization
Cai Yunqi, Li Lantian, Abel Andrew, Zhu Xiaoyan, Wang Dong
Pattern Recognition Vol 145 (2024)
https://doi.org/10.1016/j.patcog.2023.109977
Optimization of lacrimal aspect ratio for explainable eye blinking
Ayoub Misbah, Abel Andrew, Zhang Haiyang
Intelligent Systems and Applications (Intelligent Systems Conference 2023), Lecture Notes in Networks and Systems Vol 824, pp. 175-192 (2024)
https://doi.org/10.1007/978-3-031-47715-7_13


Professional Activities

MND Scotland (External organisation)
Member
2/6/2025
Remember the Humans
Speaker
8/4/2025
MND Scotland (External organisation)
Member
5/2/2025
Multimodal Communication: Machine Learning for Speech and Lipreading
Invited speaker
29/11/2024


Projects

Developing teaching professionals’ knowledge and skills in AI technologies for inclusive and equitable education
Wang, Yuchen (Principal Investigator); Abel, Andrew (Co-investigator); Jia, Laibing (Co-investigator); Noor Ul Ain, N (Researcher); Mevawalla, Zinnia (Co-investigator); Catlin, Jane (Co-investigator); Devasya, Ashwitha (Researcher); Tripathy, Aditi (Researcher); Nikou, Stavros (Co-investigator); Roxburgh, David (Co-investigator); Robinson, Deborah (Co-investigator)
This project co-designs a professional learning programme on AI and inclusive education with teachers and student teachers, funded by the EPSRC (£9,961).
01-Jan-2025 - 31-Jan-2026
AI-Based Translation of Dog Behaviour, Emotion, and Intention
Abel, Andrew (Principal Investigator); Zhao, Yingying (Co-investigator)
A feasibility study into the use of AI to analyse dog behaviour, funded by an Innovation Voucher.
20-Jan-2025 - 19-Jan-2025
Human-centred AI Technologies for Inclusion in Education and Society
Abel, Andrew (Principal Investigator); Wang, Yuchen (Principal Investigator); Jia, Laibing (Principal Investigator)
Strathclyde Centre for Doctoral Training 2023 (approx. £260,000)
01-Jan-2023 - 30-Jan-2027
Towards visually-driven speech enhancement for cognitively-inspired multi-modal hearing-aid devices (AV-COGHEAR)
Abel, Andrew (Research Co-investigator)
This ambitious project aimed to develop a new generation of hearing-aid technology that extracts speech from noise by using a camera to see what the talker is saying. The wearer of the device would be able to focus their hearing on a target talker, with the device filtering out competing sound. This ability, which is beyond that of current technology, has the potential to improve the quality of life of the millions of people with hearing loss (over 10 million in the UK alone). Our approach was consistent with normal hearing: listeners naturally combine information from their ears and eyes, using sight to help them hear. When listening to speech, the eyes follow the movements of the face and mouth, and a sophisticated, multi-stage process uses this information to separate speech from noise and fill in any gaps. Our proposed approach was much the same: we exploited visual information from a camera, together with novel algorithms for intelligently combining audio and visual information, to improve speech quality and intelligibility in real-world noisy environments.
01-Jan-2016 - 31-Jan-2019


Contact


Email: andrew.abel@strath.ac.uk
Tel: Unlisted