Google today is announcing that web and mobile versions of Google Translate are now using a new neural machine translation system for all translations from Chinese into English — and the app conducts those translations about 18 million times a day. Google is also publishing an academic paper on the method.
Previously, Google had said that it uses neural networks in Google Translate, but specifically for its real-time visual translation feature. Earlier this year, Google senior fellow Jeff Dean told VentureBeat that Google was working on incorporating deep learning into more of Google Translate. Sure enough, today's announcements are the result of that work, a spokesperson told VentureBeat in an email.
Google has been incorporating deep neural networks into more and more of its applications, including Google Allo and Inbox by Gmail. It’s also helping Google more efficiently run its data centers.
For Google neural machine translation (GNMT), the company is relying on eight-layer long short-term memory recurrent neural networks (LSTM-RNNs), “with residual connections between layers to encourage gradient flow,” the Google researchers wrote in the paper. Once the neural networks have been sufficiently trained with the help of graphics processing units (GPUs), Google relies on its recently unveiled tensor processing units (TPUs) to make inferences about new data.
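The residual connections the researchers mention add each layer's input back to its output, giving gradients a direct path through a deep stack of layers. Here is a minimal NumPy sketch of that idea, using a toy tanh recurrent layer as a stand-in for a full LSTM (the layer sizes and the `simple_rnn_layer` helper are illustrative assumptions, not Google's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_rnn_layer(xs, W, U):
    """Toy tanh recurrent layer — a stand-in for one LSTM layer."""
    h = np.zeros(W.shape[1])
    outputs = []
    for x in xs:
        h = np.tanh(x @ W + h @ U)
        outputs.append(h)
    return np.stack(outputs)

def residual_stack(xs, layers):
    """Run stacked recurrent layers with residual (skip) connections:
    each layer's input is added to its output, which encourages
    gradient flow through a deep (e.g., eight-layer) stack."""
    for W, U in layers:
        xs = simple_rnn_layer(xs, W, U) + xs  # residual connection
    return xs

d = 16                                  # hidden size (illustrative)
seq = rng.normal(size=(5, d))           # 5 time steps of input
layers = [(rng.normal(scale=0.1, size=(d, d)),
           rng.normal(scale=0.1, size=(d, d))) for _ in range(8)]
out = residual_stack(seq, layers)
print(out.shape)  # (5, 16)
```

Without the `+ xs` term, gradients in an eight-layer stack must pass through every tanh nonlinearity in sequence and tend to shrink; the skip connection lets them bypass each layer, which is what "encourage gradient flow" refers to.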
Neural machine translation has not always outperformed older phrase-based systems, but Google's implementation has shown clear advantages on several language pairs.
“Human evaluations show that GNMT has reduced translation errors by 60 percent compared to our previous phrase-based system on many pairs of languages: English↔French, English↔Spanish, and English↔Chinese,” the researchers wrote in the paper. “Additional experiments suggest the quality of the resulting translation system gets closer to that of average human translators.”
In a blog post today, Google Brain team research scientists Quoc Le and Mike Schuster noted that translation errors were actually down 55-85 percent “on several major language pairs measured on sampled sentences from Wikipedia and news websites with the help of bilingual human raters.”
Even so, the system is not perfect.
“GNMT can still make significant errors that a human translator would never make, like dropping words and mistranslating proper names or rare terms, and translating sentences in isolation rather than considering the context of the paragraph or page,” Le and Schuster wrote. “There is still a lot of work we can do to serve our users better. However, GNMT represents a significant milestone.”
For much more detail, check out the entire academic paper, as well as today’s blog post.