Amsterdam, 17 May 2017 - Never before have we seen such rapid improvements in the automatic translation of both text and speech. The ‘neural wave’ is mind-blowing and overwhelming. It’s like a whole new sandbox for researchers, even though they are often as puzzled as the rest of us about what sparked these rapid improvements. The machines have become self-learning, making thousands of decisions on their way to better results.
What’s more: Deep Learning technology can take any data - monolingual, bilingual, audio and even video - to build versatile engines that can do lip reading (and translation) and even translate between languages for which no direct bilingual data are available. The machine translations are more fluent and natural than what we are used to today, which can hide inaccuracies from the unsuspecting reader’s eye. Neural MT has also lent itself very well to the emergence of more speech-to-speech translation apps, thanks to its greater fluency and its lower demands on storage capacity.
Join us on May 30 for a 70-minute webinar with Macduff Hughes from Google, Chris Wendt from Microsoft and Jean Senellart from Systran, who will share their passion and insights about the latest developments at their companies.