Over the past year, the Parliament and Amberscript have jointly developed a speech recognition model trained specifically to recognize political speech in the parliament. Several hundred hours of parliamentary recordings were used to train the AI-based speech recognition system.
The technology uses deep neural networks and was designed specifically to understand parliamentary speech, local accents, and political topics. Until now, speech recognition had been used to create parliamentary meeting minutes more efficiently.
To leverage the technology's potential even further and make politics in the state more inclusive, the Parliament set out to automatically generate subtitles for its live video stream, through which citizens can follow the debates in the parliament. Subtitles make the debates easier to follow for individuals with auditory impairments.
After jointly developing the solution, the first debates with automatically generated live subtitles were streamed on 28, 29 and 30 October.
The stream was a great success: for many speakers, an accuracy of more than 90% was measured, which makes it possible for deaf and hard-of-hearing viewers to follow the speakers closely.
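Transcription accuracy figures of this kind are commonly derived from the word error rate (WER), i.e. the word-level edit distance between the recognized text and a reference transcript. The source does not state which metric the Parliament and Amberscript used, so the following is only an illustrative sketch (the function name and examples are hypothetical) of how such an accuracy score can be computed:

```python
def word_accuracy(reference: str, hypothesis: str) -> float:
    """Word-level accuracy as 1 - WER, via Levenshtein edit distance.

    Hypothetical illustration; not the metric actually used in the project.
    """
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    wer = dp[len(ref)][len(hyp)] / max(len(ref), 1)
    return max(0.0, 1.0 - wer)

# One wrong word out of four gives 75% word accuracy:
print(word_accuracy("the debate begins now", "the debate begin now"))  # 0.75
```

A score above 0.9 on such a metric means fewer than one word in ten is inserted, deleted, or substituted relative to what was actually said.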
The technology is still being refined, and the Parliament and Amberscript will keep working together in the coming years to improve the live subtitles further, fulfilling their vision of publishing all videos in an inclusive manner and breaking down barriers for citizens with (auditory) impairments.
All upcoming sessions will be available on the Parliament's website: https://www.landtag-mv.de/aktuelles
For more information about this and other solutions, please contact firstname.lastname@example.org