3 Ways Automated Speech Recognition (ASR) Can Foster Digital Inclusion
Thomas Dieste, 08 February 2019
In the EU, the Directive on the accessibility of the websites and mobile applications of public sector bodies (EU 2016/2102) has come into force. It requires public organizations to become more inclusive by making all their openly published content accessible to people with disabilities. This group comprises approximately 50–75 million citizens, or 10–15% of the population of the 27 EU member states. What can automated speech recognition do to help?
Especially for video and audio content that organizations publish publicly (on websites, in applications or on intranets), there are concrete measures that grant everyone equal access to that content.
Technologies such as text-to-speech can help people with visual impairments (blindness, bad eyesight) to easily navigate websites and content by using their auditory senses and by ‘listening to’ what is written on websites.
Conversely, technologies such as speech-to-text can help organizations make content accessible to people with auditory impairments (people who are deaf or hard of hearing).
Automated speech recognition has reached such a high level of accuracy that it can vastly speed up the process of generating accurate transcripts for video and audio content.
Subtitling and captioning of video files can likewise be automated to a large degree.
For high-quality audio files such as podcasts or professional e-learnings, automated speech recognition has reached a high level of accuracy. In certain cases, the accuracy can be high enough to create fully automated transcripts while maintaining the essence of what has been said. Speech-to-text software can be used to create a transcript of the audio within minutes.
Compared to manual transcription (which often takes around 5–8 times as long as the duration of the audio), an automatically generated transcript is available within a small fraction of that time.
Transcripts are the most cost-effective way to create alternative content for people with auditory impairments. Instead of listening to a podcast, deaf people can simply read the transcript. As a bonus, the transcript gives your SEO efforts a nice boost 🙂
Automated speech recognition can also help to create subtitles for video content. By uploading a video file into Amberscript, you receive automatically-generated subtitles immediately. In the online editor, you can make changes to the subtitles, adjust timestamps and download the file as SRT so that you can easily feed it into your video editing tool afterward.
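The SRT format mentioned above is a simple plain-text format: numbered cues, each with a start and end timestamp and one or more lines of text. As an illustration only (this is a minimal sketch, not Amberscript's actual export code), the following Python snippet shows how recognized speech segments could be rendered into a valid SRT file:

```python
from datetime import timedelta

def fmt_timestamp(seconds):
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    total_ms = int(timedelta(seconds=seconds).total_seconds() * 1000)
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d},{ms:03d}"

def to_srt(cues):
    """Render (start_sec, end_sec, text) tuples as an SRT document.

    Cues are numbered from 1 and separated by blank lines, as the
    SRT format requires.
    """
    blocks = []
    for i, (start, end, text) in enumerate(cues, start=1):
        blocks.append(f"{i}\n{fmt_timestamp(start)} --> {fmt_timestamp(end)}\n{text}")
    return "\n\n".join(blocks) + "\n"

# Hypothetical recognized segments: (start, end, text)
print(to_srt([(0.0, 2.5, "Welcome to the podcast."),
              (2.5, 5.0, "Today we discuss accessibility.")]))
```

A file produced this way can be loaded directly by most video editors and players, which is what makes SRT a convenient interchange format between a speech-to-text tool and a subtitling workflow.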
By automating large parts of the subtitling process, video content can be made digitally accessible very quickly.
Properly subtitled videos allow deaf viewers to put the text into the context of the moving images: they can not only see who is talking but also understand what was said and in which context.
Offering accessible video and audio content can pose an operational challenge that demands considerable capacity and time. If you lack the internal capacity to make all your content accessible, external parties can help generate transcripts, create subtitles and ensure that all WCAG 2.0 standards are met. This ensures compliance with the EU directive on digital inclusion and makes your website accessible to a large part of the population.
Internet universality is defined by equality of access to the digital environment. Find out how to promote web universality!
You can make your podcasts accessible to the deaf and hard of hearing by providing transcriptions and creating video podcasts. Find out how!
Which subtitles should you use for digital accessibility? Learn how SDH subtitles and closed captions can make your content WCAG-proof.