Microsoft’s Turing Language Model Can Now Interpret 94 Languages

Recently, the developers at Microsoft detailed the Turing multilingual language model (T-ULRv2) and announced that the AI model has achieved the top rank on the Google XTREME public leaderboard.

The Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark covers 40 typologically diverse languages spanning 12 language families, and consists of nine tasks that require reasoning about different levels of syntax and semantics.

The Turing multilingual language model (T-ULRv2) was created by the Microsoft Turing team in collaboration with Microsoft Research. The model beats the previous best submission, from Alibaba (VECO), by 3.5 points in average score.


Saurabh Tiwary, Vice President & Distinguished Engineer at Microsoft, mentioned that in order to achieve this milestone, the team leveraged StableTune, a multilingual fine-tuning technique based on stability training, along with the pre-trained model. Other popular language models on the XTREME leaderboard include XLM-R, mBERT and XLM, among others.

Ming Zhou, Assistant Managing Director at Microsoft Research Asia, stated in a blog post that the Microsoft Turing team has long believed that language representation should be universal. Such an approach would allow the trained model to be fine-tuned in one language and applied to a different one in a zero-shot fashion.

For a few years now, unsupervised pre-trained language modelling has become the backbone of all natural language processing (NLP) models, with transformer-based models at the heart of such innovation. According to Zhou, these models can overcome the challenge of requiring labelled data to train a model in every language.
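To make this zero-shot transfer pattern concrete, here is a minimal sketch that fine-tunes the publicly available xlm-roberta-base checkpoint (via Hugging Face Transformers, used purely as a stand-in, since T-ULRv2 itself is not distributed that way) on an English example and then applies it, without further training, to a sentence in another language; the texts, label and single training step are made up for illustration only:

# Minimal zero-shot cross-lingual sketch with a public multilingual encoder.
# xlm-roberta-base stands in for T-ULRv2; texts and labels are made up.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

# One fine-tuning step on an English example (label 1 = positive).
batch = tokenizer(["The service was excellent."], return_tensors="pt")
loss = model(**batch, labels=torch.tensor([1])).loss
loss.backward()
optimizer.step()

# Zero-shot application to another language, with no further training.
model.eval()
with torch.no_grad():
    es_batch = tokenizer(["El servicio fue excelente."], return_tensors="pt")
    print(model(**es_batch).logits.argmax(dim=-1))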

How T-ULRv2 Works

The Turing multilingual language model (T-ULRv2) is the latest cross-lingual innovation at the tech giant. It incorporates InfoXLM (Information-Theoretic Framework for Cross-Lingual Language Model Pre-Training), a cross-lingual pre-trained model for language understanding and generation, to create a universal model that represents 94 languages in the same vector space.

T-ULRv2 is a transformer architecture with 24 layers and 1,024 hidden states, for a total of 550 million parameters. The pre-training of this model involves three different tasks: multilingual masked language modelling (MMLM), translation language modelling (TLM) and cross-lingual contrast (XLCo).
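As a rough schematic of how these three objectives can be combined (an illustrative sketch with a toy encoder and random token ids, not Microsoft's implementation), MMLM and TLM are both masked-token cross-entropy losses, with TLM computed over a concatenated translation pair, while XLCo is sketched here as an InfoNCE-style contrastive term that pulls translation pairs together in the shared vector space:

# Illustrative sketch only: toy encoder, random token ids, simplified objectives.
import torch
import torch.nn.functional as F
from torch import nn

vocab, dim, seq = 1000, 64, 32
embed = nn.Embedding(vocab, dim)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True), num_layers=2
)
mlm_head = nn.Linear(dim, vocab)

def encode(tokens):
    return encoder(embed(tokens))

def masked_lm_loss(tokens, masked_positions):
    # Cross-entropy on masked positions (the form shared by MMLM and TLM).
    # A real MLM would replace the inputs at these positions with a mask token;
    # that step is omitted here to keep the sketch short.
    logits = mlm_head(encode(tokens)[:, masked_positions, :])
    targets = tokens[:, masked_positions]
    return F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))

def xlco_loss(src_tokens, tgt_tokens, temperature=0.1):
    # InfoNCE-style contrast: the i-th source sentence should match its translation.
    src = F.normalize(encode(src_tokens).mean(dim=1), dim=-1)
    tgt = F.normalize(encode(tgt_tokens).mean(dim=1), dim=-1)
    logits = src @ tgt.T / temperature
    return F.cross_entropy(logits, torch.arange(src.size(0)))

src = torch.randint(0, vocab, (8, seq))        # monolingual batch (MMLM)
pair = torch.randint(0, vocab, (8, 2 * seq))   # concatenated translation pairs (TLM)
tgt = torch.randint(0, vocab, (8, seq))        # translations of `src` (XLCo)
masked = torch.tensor([3, 7, 11])              # toy masked positions

loss = masked_lm_loss(src, masked) + masked_lm_loss(pair, masked) + xlco_loss(src, tgt)
loss.backward()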



Test_cov_console: Flutter Console Coverage Test

Flutter Console Coverage Test

This small Dart tool is used to generate a Flutter coverage test report to the console.

How to install

Add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):

dev_dependencies:
  test_cov_console: ^0.2.2

How to run

Run the following command to make sure all Flutter libraries are up to date:

flutter pub get
Running "flutter pub get" in coverage...                            0.5s

Run the following command to generate lcov.info in the coverage directory:

flutter test --coverage
00:02 +1: All tests passed!

Run the tool to generate the report from lcov.info:

flutter pub run test_cov_console
---------------------------------------------|---------|---------|---------|-------------------|
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
 print_cov_constants.dart                    |    0.00 |    0.00 |    0.00 |    no unit testing|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|

Optional parameters

If not given a FILE, "coverage/lcov.info" will be used.
-f, --file=<FILE>                      The target lcov.info file to be reported
-e, --exclude=<STRING1,STRING2,...>    A list of contains string for files without unit testing
                                       to be excluded from report
-l, --line                             It will print Lines & Uncovered Lines only
                                       Branch & Functions coverage percentage will not be printed
-i, --ignore                           It will not print any file without unit testing
-m, --multi                            Report from multiple lcov.info files
-c, --csv                              Output to CSV file
-o, --output=<CSV-FILE>                Full path of output CSV file
                                       If not given, "coverage/test_cov_console.csv" will be used
-t, --total                            Print only the total coverage
                                       Note: it will ignore all other option (if any), except -m
-p, --pass=<MINIMUM>                   Print only the whether total coverage is passed MINIMUM value or not
                                       If the value >= MINIMUM, it will print PASSED, otherwise FAILED
                                       Note: it will ignore all other option (if any), except -m
-h, --help                             Show this help

Example: run the tool with parameters

flutter pub run test_cov_console --file=coverage/lcov.info --exclude=_constants,_mock
---------------------------------------------|---------|---------|---------|-------------------|
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|

Report for multiple lcov.info files (-m, --multi)

It supports running against multiple lcov.info files with the following directory structures:
1. No root module
<root>/<module_a>
<root>/<module_a>/coverage/lcov.info
<root>/<module_a>/lib/src
<root>/<module_b>
<root>/<module_b>/coverage/lcov.info
<root>/<module_b>/lib/src
...
2. With root module
<root>/coverage/lcov.info
<root>/lib/src
<root>/<module_a>
<root>/<module_a>/coverage/lcov.info
<root>/<module_a>/lib/src
<root>/<module_b>
<root>/<module_b>/coverage/lcov.info
<root>/<module_b>/lib/src
...
You must run test_cov_console in the <root> directory, and the report will be grouped by module. Here is
the sample output for the 'with root module' directory structure:
flutter pub run test_cov_console --file=coverage/lcov.info --exclude=_constants,_mock --multi
---------------------------------------------|---------|---------|---------|-------------------|
File                                         |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|
---------------------------------------------|---------|---------|---------|-------------------|
File - module_a -                            |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|
---------------------------------------------|---------|---------|---------|-------------------|
File - module_b -                            |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/                                     |         |         |         |                   |
 print_cov.dart                              |  100.00 |  100.00 |   88.37 |...,149,205,206,207|
lib/                                         |         |         |         |                   |
 test_cov_console.dart                       |    0.00 |    0.00 |    0.00 |    no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
 All files with unit testing                 |  100.00 |  100.00 |   88.37 |                   |
---------------------------------------------|---------|---------|---------|-------------------|

Output to CSV file (-c, --csv, -o, --output)

flutter pub run test_cov_console -c --output=coverage/test_coverage.csv

Sample CSV output file:
File,% Branch,% Funcs,% Lines,Uncovered Line #s
lib/,,,,
test_cov_console.dart,0.00,0.00,0.00,no unit testing
lib/src/,,,,
parser.dart,100.00,100.00,97.22,"97"
parser_constants.dart,100.00,100.00,100.00,""
print_cov.dart,100.00,100.00,82.91,"29,49,51,52,171,174,177,180,183,184,185,186,187,188,279,324,325,387,388,389,390,391,392,393,394,395,398"
print_cov_constants.dart,0.00,0.00,0.00,no unit testing
All files with unit testing,100.00,100.00,86.07,""

Installing

Use this package as an executable

Install it

You can install the package from the command line:

dart pub global activate test_cov_console

Use it

The package has the following executables:

$ test_cov_console

Use this package as a library

Depend on it

Run this command:

With Dart:

 $ dart pub add test_cov_console

With Flutter:

 $ flutter pub add test_cov_console

This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):

dependencies:
  test_cov_console: ^0.2.2

Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.

Import it

Now in your Dart code, you can use:

import 'package:test_cov_console/test_cov_console.dart';

example/lib/main.dart

import 'package:flutter/material.dart';

void main() {
  runApp(MyApp());
}

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        // This is the theme of your application.
        //
        // Try running your application with "flutter run". You'll see the
        // application has a blue toolbar. Then, without quitting the app, try
        // changing the primarySwatch below to Colors.green and then invoke
        // "hot reload" (press "r" in the console where you ran "flutter run",
        // or simply save your changes to "hot reload" in a Flutter IDE).
        // Notice that the counter didn't reset back to zero; the application
        // is not restarted.
        primarySwatch: Colors.blue,
        // This makes the visual density adapt to the platform that you run
        // the app on. For desktop platforms, the controls will be smaller and
        // closer together (more dense) than on mobile platforms.
        visualDensity: VisualDensity.adaptivePlatformDensity,
      ),
      home: MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

class MyHomePage extends StatefulWidget {
  MyHomePage({Key? key, required this.title}) : super(key: key);

  // This widget is the home page of your application. It is stateful, meaning
  // that it has a State object (defined below) that contains fields that affect
  // how it looks.

  // This class is the configuration for the state. It holds the values (in this
  // case the title) provided by the parent (in this case the App widget) and
  // used by the build method of the State. Fields in a Widget subclass are
  // always marked "final".

  final String title;

  @override
  _MyHomePageState createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      // This call to setState tells the Flutter framework that something has
      // changed in this State, which causes it to rerun the build method below
      // so that the display can reflect the updated values. If we changed
      // _counter without calling setState(), then the build method would not be
      // called again, and so nothing would appear to happen.
      _counter++;
    });
  }

  @override
  Widget build(BuildContext context) {
    // This method is rerun every time setState is called, for instance as done
    // by the _incrementCounter method above.
    //
    // The Flutter framework has been optimized to make rerunning build methods
    // fast, so that you can just rebuild anything that needs updating rather
    // than having to individually change instances of widgets.
    return Scaffold(
      appBar: AppBar(
        // Here we take the value from the MyHomePage object that was created by
        // the App.build method, and use it to set our appbar title.
        title: Text(widget.title),
      ),
      body: Center(
        // Center is a layout widget. It takes a single child and positions it
        // in the middle of the parent.
        child: Column(
          // Column is also a layout widget. It takes a list of children and
          // arranges them vertically. By default, it sizes itself to fit its
          // children horizontally, and tries to be as tall as its parent.
          //
          // Invoke "debug painting" (press "p" in the console, choose the
          // "Toggle Debug Paint" action from the Flutter Inspector in Android
          // Studio, or the "Toggle Debug Paint" command in Visual Studio Code)
          // to see the wireframe for each widget.
          //
          // Column has various properties to control how it sizes itself and
          // how it positions its children. Here we use mainAxisAlignment to
          // center the children vertically; the main axis here is the vertical
          // axis because Columns are vertical (the cross axis would be
          // horizontal).
          mainAxisAlignment: MainAxisAlignment.center,
          children: <Widget>[
            Text(
              'You have pushed the button this many times:',
            ),
            Text(
              '$_counter',
              style: Theme.of(context).textTheme.headline4,
            ),
          ],
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _incrementCounter,
        tooltip: 'Increment',
        child: Icon(Icons.add),
      ), // This trailing comma makes auto-formatting nicer for build methods.
    );
  }
}

Author: DigitalKatalis
Source Code: https://github.com/DigitalKatalis/test_cov_console 
License: BSD-3-Clause license



Top Microsoft big data solutions Companies | Best Microsoft big data Developers

An extensively researched list of top Microsoft big data analytics and solution providers, with ratings and reviews, to help you find the best Microsoft big data solutions development companies around the world.
This exclusive list of Microsoft Big Data consulting and solution providers was compiled after examining various factors of expert big data analytics firms and finding the matches that boast proven quality in data analytics. Because business growth and enterprise acceleration now depend on drawing insights from the whole of an organization's data, we bring you the most trustworthy Microsoft Big Data consultants and solution providers.
Let’s take a look at the list of the best Microsoft big data solutions companies.


Uncovering the Magic: interpreting Machine Learning black-box models

**The trade-off between predictive power and interpretability** is a common issue to face when working with black-box models, especially in business environments where results have to be explained to non-technical audiences. Interpretability is crucial to being able to question, understand, and trust AI and ML systems. It also provides data scientists and engineers better means for debugging models and ensuring that they are working as intended.


Motivation

This tutorial aims to present different techniques for approaching model interpretation in black-box models.

_Disclaimer:_ this article seeks to introduce some useful techniques from the field of interpretable machine learning to the average data scientist and to motivate their adoption. Most of them have been summarized from the highly recommendable book by Christoph Molnar: Interpretable Machine Learning.

The entire code used in this article can be found on my GitHub.

Contents

  1. Taxonomy of Interpretability Methods
  2. Dataset and Model Training
  3. Global Importance
  4. Local Importance

1. Taxonomy of Interpretability Methods

  • **Intrinsic or Post-Hoc?** This criterion distinguishes whether interpretability is achieved by restricting the complexity of the machine learning model (intrinsic) or by applying methods that analyze the model after training (post-hoc).
  • **Model-Specific or Model-Agnostic?** Linear models have a model-specific interpretation, since the interpretation of the regression weights is specific to that sort of model. Similarly, decision tree splits have their own specific interpretation. Model-agnostic tools, on the other hand, can be used on any machine learning model and are applied after the model has been trained (post-hoc).
  • **Local or Global?** Local interpretability refers to explaining an individual prediction, whereas global interpretability is related to explaining the model's general behavior in the prediction task. Both types of interpretation are important, and there are different tools for addressing each of them.

2. Dataset and Model Training

The dataset used for this article is the Adult Census Income from the UCI Machine Learning Repository. The prediction task is to determine whether a person makes over $50K a year.

Since the focus of this article is not the modelling phase of the ML pipeline, minimal feature engineering was performed in order to model the data with an XGBoost classifier.
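A minimal sketch of such a setup is shown below; the exact preprocessing and hyperparameters used in the article are not reproduced here, so the OpenML loading step, the integer encoding of categoricals and the XGBoost settings are illustrative assumptions only:

# Illustrative setup only: data loading, encoding and hyperparameters are assumptions.
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

adult = fetch_openml("adult", version=2, as_frame=True)
X, y = adult.data.copy(), (adult.target == ">50K").astype(int)

# Minimal feature engineering: integer-encode the categorical columns.
for col in X.select_dtypes(include=["category", "object"]).columns:
    X[col] = X[col].astype("category").cat.codes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=1984
)

clf_xgb_df = XGBClassifier(n_estimators=300, max_depth=4, eval_metric="logloss")
clf_xgb_df.fit(X_train, y_train)

print("Train AUC:", roc_auc_score(y_train, clf_xgb_df.predict_proba(X_train)[:, 1]))
print("Test AUC:", roc_auc_score(y_test, clf_xgb_df.predict_proba(X_test)[:, 1]))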

The performance metrics obtained for the model are the following:


Fig. 1: Receiver Operating Characteristic (ROC) curves for Train and Test sets.


Fig. 2: XGBoost performance metrics

The model’s performance seems to be pretty acceptable.

3. Global Importance

The techniques used to evaluate the global behavior of the model will be:

3.1 - Feature Importance (evaluated by the XGBoost model and by SHAP)

3.2 - Summary Plot (SHAP)

3.3 - Permutation Importance (ELI5)

3.4 - Partial Dependence Plot (PDPBox and SHAP)

3.5 - Global Surrogate Model (Decision Tree and Logistic Regression)

3.1 - Feature Importance

  • XGBoost (model-specific)
import pandas as pd  # the barh plot also requires matplotlib to be installed

feat_importances = pd.Series(clf_xgb_df.feature_importances_, index=X_train.columns).sort_values(ascending=True)
feat_importances.tail(20).plot(kind='barh')


Fig. 3: XGBoost Feature Importance

When working with XGBoost, one must be careful when interpreting feature importances, since the results might be misleading. This is because the model calculates several importance metrics with different interpretations. It creates an importance matrix, a table whose first column contains the names of all the features actually used in the boosted trees, and whose other columns contain the resulting 'importance' values calculated with different metrics (Gain, Cover, Frequency). A more thorough explanation of these can be found here.

**Gain** is the most relevant attribute for interpreting the relative importance (i.e. improvement in accuracy) of each feature.
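For reference, these per-metric importance values can be read directly from the fitted booster (using the clf_xgb_df model from this article; note that the frequency metric is exposed as 'weight' in the XGBoost Python API):

# Per-metric importance values straight from the booster; 'weight' is the
# split-frequency metric, 'gain' the average objective improvement per split.
booster = clf_xgb_df.get_booster()
for metric in ("gain", "cover", "weight"):
    scores = booster.get_score(importance_type=metric)
    print(metric, sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:5])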

  • SHAP

In general, the SHAP library is considered a model-agnostic tool for addressing interpretability (we will cover SHAP's intuition in the Local Importance section). However, the library also has a model-specific method for tree-based machine learning models such as decision trees, random forests and gradient boosted trees.

import shap

explainer = shap.TreeExplainer(clf_xgb_df)
shap_values = explainer.shap_values(X_test)

shap.summary_plot(shap_values, X_test, plot_type='bar')


Fig. 4: SHAP Feature Importance

The XGBoost feature importance was used to evaluate the relevance of the predictors to the model's outputs on the Train set, and the SHAP importance to evaluate it on the Test set, in order to assess whether the most important features were similar across both approaches and sets.

It is observed that the most important variables of the model are maintained, although in a different order of importance (age seems to take on much more relevance in the Test set under the SHAP approach).

3.2 Summary Plot (SHAP)

The SHAP Summary Plot is a very interesting plot to evaluate the features of the model, since it provides more information than the traditional Feature Importance:

  • Feature Importance: variables are sorted in descending order of importance.
  • Impact on Prediction: the position on the horizontal axis indicates whether the values of the dataset instances for each feature have more or less impact on the output of the model.
  • Original Value: the color indicates, for each feature, whether it is a high or low value (within the range of that feature).
  • Correlation: the correlation of a feature with the model output can be analyzed by evaluating its color (its range of values) and the impact on the horizontal axis. For example, it is observed that age has a positive correlation with the target, since the impact on the output increases as the value of the feature increases.
shap.summary_plot(shap_values, X_test)


Fig. 5: SHAP Summary Plot

3.3 - Permutation Importance (ELI5)

Another way to assess the global importance of the predictors is to randomly permute the values of each feature in the dataset and predict with the trained model. If this disturbance does not change the evaluation metric substantially, the feature is not very relevant; if instead the evaluation metric is affected, the feature is considered important to the model. This process is done individually for each feature.

To evaluate the trained XGBoost model, the Area Under the Curve (AUC) of the ROC Curve will be used as the performance metric. Permutation Importance will be analyzed in both Train and Test:

import eli5
from eli5.sklearn import PermutationImportance

# Train
perm = PermutationImportance(clf_xgb_df, scoring='roc_auc', random_state=1984).fit(X_train, y_train)
eli5.show_weights(perm, feature_names=X_train.columns.tolist())

# Test
perm = PermutationImportance(clf_xgb_df, scoring='roc_auc', random_state=1984).fit(X_test, y_test)
eli5.show_weights(perm, feature_names=X_test.columns.tolist())


Fig. 6: Permutation Importance for Train and Test sets.

Even though the order of the most important features changes, it looks like the most relevant ones remain the same. It is interesting to note that, unlike in the XGBoost Feature Importance, the age variable has a fairly strong effect in the Train set (as shown by the SHAP Feature Importance in the Test set). Furthermore, the 6 most important variables according to the Permutation Importance are the same in Train and Test (the difference in order may be due to the distribution of each sample).

The coherence between the different approaches to approximating global importance generates more confidence in the interpretation of the model's output.
