Recently, the developers at Microsoft detailed the Turing multilingual language model (T-ULRv2) and announced that the AI model has achieved the top rank on the Google XTREME public leaderboard.

The Cross-lingual TRansfer Evaluation of Multilingual Encoders, also known as the XTREME benchmark, covers 40 typologically diverse languages spanning 12 language families. XTREME also comprises nine tasks that require reasoning about different levels of syntax as well as semantics.

The Turing multilingual language model (T-ULRv2) was created by the Microsoft Turing team in collaboration with Microsoft Research. The model beat the previous best from Alibaba (VECO) by 3.5 points in average score.


Saurabh Tiwary, Vice President & Distinguished Engineer at Microsoft, mentioned that to achieve this milestone, the team leveraged StableTune, a multilingual fine-tuning technique based on stability training, along with the pre-trained model. Other popular language models on the XTREME leaderboard include XLM-R, mBERT and XLM, among others.

Ming Zhou, Assistant Managing Director at Microsoft Research Asia, stated in a blog post that the Microsoft Turing team has long believed that language representation should be universal. Such an approach would allow a model fine-tuned in one language to be applied to a different one in a zero-shot fashion.
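Zero-shot transfer of this kind can be illustrated with a short sketch. T-ULRv2 itself is not distributed as open weights, so the code below uses the publicly available XLM-R checkpoint from Hugging Face as a stand-in; the model name, label count and example sentences are illustrative assumptions, not details from Microsoft's announcement.

```python
# Minimal sketch of zero-shot cross-lingual transfer, assuming the Hugging Face
# `transformers` library and XLM-R as a stand-in for T-ULRv2 (whose weights
# are not publicly downloadable).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "xlm-roberta-base"  # stand-in; T-ULRv2 is not on the Hub
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Step 1: fine-tune the classifier head on *English* labelled data only
# (standard supervised training loop, elided here for brevity).

# Step 2: apply the fine-tuned model directly to another language -- here
# German -- without ever seeing German labels:
batch = tokenizer("Der Film war ausgezeichnet.", return_tensors="pt")
with torch.no_grad():
    probs = model(**batch).logits.softmax(dim=-1)
print(probs)  # zero-shot class probabilities for the German sentence
```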

For a few years now, unsupervised pre-trained language modelling has become the backbone of natural language processing (NLP), with transformer-based models at the heart of such innovation. According to Zhou, these models have the capability to overcome the challenge of requiring labelled data to train a model in every language.

How T-ULRv2 Works

The Turing multilingual language model (T-ULRv2) is the latest cross-lingual innovation at the tech giant. It incorporates InfoXLM (Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training), a cross-lingual pre-trained model for language understanding and generation, to create a universal model that represents 94 languages in the same vector space.
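A shared vector space means that translations of the same sentence should encode to nearby points, whatever the language. The sketch below illustrates the idea, again with the open XLM-R model standing in for T-ULRv2; mean pooling over the last hidden layer is one common (assumed) way to derive a sentence vector.

```python
# Sketch: parallel sentences land close together in a shared multilingual
# embedding space. XLM-R stands in for T-ULRv2; mean pooling is an assumption.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

def embed(text: str) -> torch.Tensor:
    batch = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (1, seq_len, hidden_dim)
    return hidden.mean(dim=1).squeeze(0)           # mean-pooled sentence vector

en = embed("The weather is nice today.")
fr = embed("Il fait beau aujourd'hui.")
# Cosine similarity should be high for translation pairs in a universal space.
print(torch.cosine_similarity(en, fr, dim=0).item())
```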

T-ULRv2 is a transformer architecture with 24 layers and a hidden dimension of 1,024, for a total of 550 million parameters. The model is pre-trained on three tasks: multilingual masked language modelling (MMLM), translation language modelling (TLM) and cross-lingual contrast (XLCo).
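The general shape of these three objectives can be sketched as follows: MMLM predicts masked tokens in a monolingual sentence; TLM applies the same objective to a concatenated translation pair so the model can attend across languages; and XLCo is a contrastive, InfoNCE-style loss over translation pairs, as described in the InfoXLM work. The temperature and loss weighting below are assumptions for illustration, not figures from the paper.

```python
# Hedged sketch of the three pre-training objectives; details such as the
# temperature and the loss weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def masked_lm_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # MMLM: predict masked tokens in a monolingual sentence.
    # TLM: identical objective, but the input is a translation pair joined
    # into one sequence, so cross-lingual attention helps fill the masks.
    return F.cross_entropy(
        logits.view(-1, logits.size(-1)), labels.view(-1),
        ignore_index=-100,  # positions that were not masked are ignored
    )

def xlco_loss(src_vecs: torch.Tensor, tgt_vecs: torch.Tensor,
              temperature: float = 0.1) -> torch.Tensor:
    # XLCo: pull each sentence toward its translation and push it away from
    # the other sentences in the batch (InfoNCE over translation pairs).
    src = F.normalize(src_vecs, dim=-1)
    tgt = F.normalize(tgt_vecs, dim=-1)
    sims = src @ tgt.t() / temperature       # (batch, batch) similarity matrix
    targets = torch.arange(src.size(0))      # i-th source pairs with i-th target
    return F.cross_entropy(sims, targets)

# The total pre-training loss combines the terms, e.g.:
# loss = mmlm + tlm + xlco   (exact weighting follows the InfoXLM paper)
```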


