New lip-reading technology to decipher speech when audio is inaudible

London, March 25 (IANS) Scientists from the University of East Anglia (UEA) have developed a new lip-reading technology that can help in solving crimes and provide communication assistance for people with hearing and speech impairments.

The visual speech recognition technology, created by Dr Helen L. Bear and Professor Richard Harvey, can be applied in “any place where the audio isn’t good enough to determine what people are saying.”

Unique problems in determining speech arise when sound isn’t available, such as in CCTV footage, or when the audio is inadequate and there are no clues to the context of a conversation.

“We are still learning the science of visual speech and what it is people need to know to create a fool-proof recognition model for lip-reading, but this classification system improves upon previous lip-reading methods by using a novel training method for the classifiers,” Dr Bear explained.

Potentially, a robust lip-reading system could be applied in a number of situations from criminal investigations to entertainment.

Lip-reading has been used to pinpoint words footballers have shouted in heated moments on the pitch, but it is likely to be of most practical use in situations where there are high levels of noise, such as in cars or aircraft cockpits.

“Such a system could be adapted for use for a range of purposes like for people with hearing or speech impairments. Alternatively, a good lip-reading machine could be part of an audio-visual recognition system,” Dr Bear added.

Lip-reading is one of the most challenging problems in artificial intelligence, so it is great to make progress on one of the trickier aspects, “which is how to train machines to recognise the appearance and shape of human lips,” Harvey noted.

The findings were scheduled to be presented at the International Conference on Acoustics, Speech and Signal Processing (ICASSP) in Shanghai on Friday.

The paper appears in the Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) 2016.
