
A.I. is translating messages of long-lost languages

MIT and Google researchers use deep learning to decipher ancient languages.
Key Takeaways
  • Researchers from MIT and Google Brain discover how to use deep learning to decipher ancient languages.
  • The technique can be used to read languages that died long ago.
  • The method builds on the ability of machines to quickly complete monotonous tasks.

There are roughly 6,500 to 7,000 languages spoken in the world today. But that's less than a quarter of all the languages people have spoken over the course of human history: by some linguistic estimates, around 31,000 in total. Every time a language is lost, so goes that way of thinking and relating to the world; the relationships and the poetry of life uniquely described in that language are lost too. But what if you could figure out how to read dead languages? Researchers from MIT and Google Brain created an AI-based system that can do just that.

While languages change, many of their symbols and the ways words and characters are distributed stay relatively constant over time. Because of that, you could attempt to decode a long-lost language if you understood its relationship to a known progenitor language. This insight is what allowed the team, which included Jiaming Luo and Regina Barzilay from MIT and Yuan Cao from Google's AI lab, to use machine learning to decipher Linear B, a script used to write an early form of Greek (from around 1400 BC), and Ugaritic, a cuneiform language closely related to Hebrew that is also over 3,000 years old.

Linear B had previously been cracked by a human: Michael Ventris deciphered it in 1953. But this was the first time the language was figured out by a machine.

The researchers' approach focused on four key properties related to the context and alignment of the characters to be deciphered: distributional similarity, monotonic character mapping, structural sparsity, and significant cognate overlap.

They trained the AI network to look for these traits, correctly translating 67.3% of Linear B cognates (words of common origin) into their Greek equivalents.

According to MIT Technology Review, what AI can potentially do better in such tasks is take a brute-force approach that would be too exhausting for humans: attempting to translate the symbols of an unknown alphabet by quickly testing them against the symbols of one known language after another, running them through everything that is already known.
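The brute-force idea can be sketched very loosely in Python. Everything here is an illustrative assumption, not the paper's actual method (which learns neural character embeddings and solves a minimum-cost flow problem): the toy maps each unknown symbol to a candidate language's letters by frequency rank, a crude stand-in for distributional similarity, and keeps whichever candidate language yields the most real decoded words, a rough proxy for cognate overlap.

```python
from collections import Counter

def rank_mapping(unknown_words, known_words):
    # Map each unknown symbol to a known letter by frequency rank:
    # a crude stand-in for the paper's distributional similarity.
    u_ranked = [s for s, _ in Counter("".join(unknown_words)).most_common()]
    k_ranked = [c for c, _ in Counter("".join(known_words)).most_common()]
    return dict(zip(u_ranked, k_ranked))

def score_language(unknown_words, candidate_vocab):
    # Decode the unknown corpus with the rank mapping, then count how
    # many decoded words are real words of the candidate language.
    mapping = rank_mapping(unknown_words, candidate_vocab)
    decoded = ["".join(mapping.get(s, "?") for s in w) for w in unknown_words]
    hits = sum(w in set(candidate_vocab) for w in decoded)
    return hits / len(unknown_words), decoded

# Toy "unknown script": the words sun, sons, nun under a substitution
# cipher (s -> 1, u -> 2, n -> 3, o -> 4).
unknown = ["123", "1431", "323"]
candidates = {
    "english_like": ["sun", "sons", "nun"],
    "latin_like": ["amo", "mare", "arma"],
}
# Brute force: try every candidate language, keep the best-scoring one.
best = max(candidates, key=lambda lang: score_language(unknown, candidates[lang])[0])
```

Here the frequency ranks of the ciphered symbols line up with those of the English-like vocabulary, so that candidate decodes every word correctly and wins, while the Latin-like candidate produces gibberish and scores zero.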

Next for the scientists? Perhaps the translation of Linear A, the older Minoan script that no one has succeeded in deciphering so far.

You can check out their paper, "Neural Decipherment via Minimum-Cost Flow: from Ugaritic to Linear B."

Noam Chomsky on Language’s Great Mysteries

Noam Chomsky contemplates the basic, yet still unanswerable, questions of linguistics.

