The ECMAScript proposal “globalThis” by Jordan Harband provides a new standard way of accessing the global object.
JavaScript’s variable scopes are nested and form a tree whose root is the global scope. In browsers, code is directly in that scope only at the top level of scripts. There are two kinds of global variables:
- Global object variables are properties of the global object. They are created by var and function declarations at the top level of scripts.
- Plain global variables are not properties of the global object. They are created by const, let, and class declarations at the top level of scripts.
The global object can be accessed via globalThis. The following HTML fragment demonstrates globalThis and the different kinds of global variables.
<script>
  const one = 1;
  var two = 2;
</script>
<script>
  // All scripts share the same top-level scope:
  console.log(one); // 1
  console.log(two); // 2
  // Not all declarations create properties of the global object:
  console.log(globalThis.one); // undefined
  console.log(globalThis.two); // 2
</script>
Note that each module has its own scope. Therefore, variables that exist at the top level of a module are not global. The following diagram illustrates how the various scopes are related.
Whenever there is no receiver (the object of a method call), the value of this depends on the current scope:
- At the top level of a script, this refers to the global this (the same value as globalThis).
- At the top level of a module, this is undefined.
- In an ordinary function call, this is the global this in sloppy mode and undefined in strict mode.
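A small sketch of these rules in a browser:

<script>
  // Script top-level scope: `this` is the global this.
  console.log(this === globalThis); // true
  function sloppy() { return this; }
  console.log(sloppy() === globalThis); // true (sloppy mode)
  function strict() { 'use strict'; return this; }
  console.log(strict()); // undefined (strict mode)
</script>
<script type="module">
  // Module top-level scope: `this` is undefined.
  console.log(this); // undefined
</script>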
If you call eval() indirectly, it is executed in global scope, in sloppy mode. Therefore, you can use the following code to get the global this:
const theGlobalThis = eval.call(undefined, 'this');
new Function() is also always evaluated in sloppy mode:
const theGlobalThis = new Function('return this')();
There is one important caveat, though: eval, new Function(), etc. are not available if you use CSP (Content Security Policy). That makes this approach unsuitable in many cases.
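When eval() and new Function() are blocked by CSP, a common workaround is to feature-test host-provided names instead; this is essentially what the polyfill shown later does. A minimal sketch:

var theGlobalThis =
  (typeof self !== 'undefined' && self) ||     // browsers and web workers
  (typeof window !== 'undefined' && window) || // browsers
  (typeof global !== 'undefined' && global);   // Node.js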
In browsers, the global this does not point directly to the global object
As an example, consider an iframe on a web page: whenever the iframe’s src changes, it gets a fresh environment with a new global object; yet the value of its global this stays the same. Browsers achieve that by distinguishing two objects:
- Window: the global object. It is replaced whenever the iframe navigates to a new location.
- WindowProxy: an object that forwards all accesses to the current Window. Its identity never changes.
In browsers, global this refers to the WindowProxy; everywhere else, it directly refers to the global object.
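The stable identity can be observed from the embedding page (a sketch; the iframe element and the same-origin URL are made up):

const iframe = document.querySelector('iframe');
const proxyBefore = iframe.contentWindow; // a WindowProxy
iframe.onload = () => {
  // The underlying Window was replaced, but the WindowProxy was not:
  console.log(iframe.contentWindow === proxyBefore); // true
};
iframe.src = 'two.html'; // navigate the iframe (hypothetical URL)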
globalThis is the new standard way of accessing global this. Existing simple ways depend on the platform:
- window: the classic way of referring to the global object in browsers. It doesn’t work in web workers and on Node.js.
- self: available in browsers and web workers, but not on Node.js.
- global: only available on Node.js.
The proposal also standardizes that the global object must have Object.prototype in its prototype chain. The following is already true in web browsers today:
> Object.prototype.isPrototypeOf(window)
true
The global object is now considered a mistake that JavaScript can’t get rid of, due to backward compatibility. It affects performance negatively and is generally confusing.
ECMAScript introduced several features that make it easier to avoid the global object – for example:
- const, let, and class declarations at the top level of scripts don’t create properties of the global object.
- Each module has its own scope, so variables declared there are never global.
It is normally preferable to refer to global variables as variables and not as properties of globalThis. That has always worked on all JavaScript platforms.
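A minimal illustration of the difference (script scope assumed):

<script>
  var limit = 10;
  // Preferable: refer to the global variable as a variable...
  console.log(limit); // 10
  // ...not as a property of globalThis (works, but is normally unnecessary):
  console.log(globalThis.limit); // 10
</script>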
Therefore, there are relatively few use cases for globalThis – for example:
- Polyfills and shims that must install or look up features across platforms.
- Feature detection, to find out what features the current platform supports.
The proposal’s author, Jordan Harband, has written a polyfill for globalThis.
Using it with CommonJS syntax:
// Computing the value of `global`:
var global = require('globalthis')();

// Shimming `global` (installing it globally):
require('globalthis/shim')();
Using it with ES6 module syntax:
// Computing the value of `global`:
import getGlobal from 'globalthis';
const global = getGlobal();

// Shimming `global` (installing it globally):
import shim from 'globalthis/shim';
shim();
The package always uses the “most native” approach available (global on Node.js, window in normal browser contexts, etc.).
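For example, on Node.js the computed value should be the built-in global (a sketch):

// On Node.js, the "most native" result is the built-in `global`:
var getGlobal = require('globalthis');
console.log(getGlobal() === global); // true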
Computing a reference to the global object
Internally, the polyfill uses the function getPolyfill() to compute a reference to the global object. This is how that is achieved:
var implementation = require('./implementation');

module.exports = function getPolyfill() {
  if (typeof global !== 'object' || !global
      || global.Math !== Math || global.Array !== Array) {
    return implementation;
  }
  return global;
};
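The checks on Math and Array guard against a value of global that merely shadows the real global object. A hypothetical use (assuming the package exposes the file as globalthis/polyfill):

var getPolyfill = require('globalthis/polyfill'); // path is an assumption
var globalRef = getPolyfill();
console.log(globalRef.Math === Math); // true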
implementation.js:
if (typeof self !== 'undefined') {
  module.exports = self;
} else if (typeof window !== 'undefined') {
  module.exports = window;
} else if (typeof global !== 'undefined') {
  module.exports = global;
} else {
  module.exports = Function('return this')();
}
Why not use global, self, or window everywhere?
Alas, that is not possible, because many JavaScript libraries use these variables to detect which platform they are running on.
What other names for globalThis were considered?
An issue in the proposal’s repository lists names that were considered and why they were rejected.
#javascript #es6 #web-development
As artificial intelligence (AI) models, especially those using deep learning, have gained prominence over the last eight or so years [8], they are now significantly impacting society, in settings ranging from loan decisions to self-driving cars. Inherently, though, a majority of these models are opaque, and hence following their recommendations blindly in human-critical applications can raise issues of fairness, safety, and reliability, among many others. This has led to the emergence of a subfield of AI called explainable AI (XAI) [7]. XAI is primarily concerned with understanding or interpreting the decisions made by these opaque or black-box models so that one can place appropriate trust in them and, in some cases, achieve even better performance through human-machine collaboration [5].
While there are multiple views on what XAI is [12] and how explainability can be formalized [4, 6], it is still unclear what XAI truly is and why it is hard to formalize mathematically. The reason for this lack of clarity is that not only must the model and/or data be considered, but also the final consumer of the explanation. Given this intermingled view, most XAI methods [11, 9, 3] try to meet all these requirements at the same time. For example, many methods try to identify a sparse set of features that replicate the decision of the model, where sparsity serves as a proxy for the consumer’s mental model. An important question is whether we can disentangle the steps that XAI methods are trying to accomplish. This may help us better understand the truly challenging parts of XAI, as well as the simpler parts, and it may motivate different types of methods.
We conjecture that the XAI process can be broadly disentangled into two parts, as depicted in Figure 1. The first part is uncovering what is truly happening in the model that we want to understand, while the second part is about conveying that information to the user in a consumable way. The first part is relatively easy to formalize as it mainly deals with analyzing how well a simple proxy model might generalize either locally or globally with respect to (w.r.t.) data that is generated using the black-box model. Rather than having generalization guarantees w.r.t. the underlying distribution, we now want them w.r.t. the (conditional) output distribution of the model. Once we have some way of figuring out what is truly important, a second step is to communicate this information. This second part is much less clear as we do not have an objective way of characterizing an individual’s mind. This part, we believe, is what makes explainability as a whole so challenging to formalize. A mainstay for a lot of XAI research over the last year or so has been to conduct user studies to evaluate new XAI methods.
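The first, more formalizable step can be made concrete with a toy sketch: sample inputs around a point of interest, query the black box for outputs, and fit a simple proxy model to that generated data. All names below are illustrative, and the black box is a stand-in:

// Fit a simple local surrogate (a line) to outputs generated by an
// opaque model around a point x0. `blackBox` stands in for the model.
function fitLocalSurrogate(blackBox, x0, radius, n) {
  const xs = [], ys = [];
  for (let i = 0; i < n; i++) {
    const x = x0 + (Math.random() * 2 - 1) * radius; // sample near x0
    xs.push(x);
    ys.push(blackBox(x)); // label with the black box's output
  }
  // Ordinary least squares for y ≈ slope * x + intercept.
  const mean = (a) => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs), my = mean(ys);
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    den += (xs[i] - mx) ** 2;
  }
  const slope = num / den;
  return { slope, intercept: my - slope * mx };
}

// Example: near 0, Math.exp locally looks like a line with slope ~1.
console.log(fitLocalSurrogate(Math.exp, 0, 0.1, 200));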
#overviews #ai #explainability #explainable ai #xai
Let’s understand why explainable AI is causing such a fuss nowadays. Consider an example: a person (a consumer), Mr. X, goes to a bank for a personal loan, and the bank collects his demographic details, credit bureau details, and last six months of bank statements. After collecting all the documents, the bank runs them through the machine learning model deployed in production to check whether this person will default on the loan or not.
The complex ML model deployed in production says that this person has a 55% chance of defaulting on his loan, and the bank subsequently rejects Mr. X’s personal loan application.
Now Mr. X is very angry and puzzled about the rejection of his application, so he goes to the bank manager for an explanation of why his personal loan application was rejected. The manager looks at the application and is puzzled: it looks good enough to grant a loan, so why did the model predict a default? This chaos creates doubt in the manager’s mind about every loan that was previously rejected by the machine learning model. Although the accuracy of the model is more than 98%, it still fails to gain trust.
Every data scientist wants to deploy a model to production that has the highest accuracy in predicting the output. Below is a graph of the trade-off between the interpretability and the accuracy of a model.
Interpretability vs. Accuracy of the Model
Notice that as the accuracy of a model increases, its interpretability decreases significantly, and that obstructs complex models from being used in production.
This is where explainable AI rescues us. Explainable AI does not only predict the outcome; it also explains the process and the features used to reach that conclusion. Isn’t it great that the model explains itself?
ML and AI applications have reached almost every industry, such as banking and finance, healthcare, manufacturing, and e-commerce. But people are still afraid to use complex models in their fields because they think complex machine learning models are black boxes that cannot explain their output to businesses and stakeholders. I hope you have now understood why explainable AI is required for the better and more efficient use of machine learning and deep learning models.
Now, let’s understand what explainable AI is and how it works.
Explainable AI is a set of tools and methods in artificial intelligence (AI) that explain how a model has reached a particular output for a given data point.
Consider the example above, where Mr. X’s loan was rejected and the bank manager was not able to figure out why. Here, explainable AI can give the important features the model considered, and the importance it assigned to each, in reaching this output. Now the manager has his report.
#explainable-ai #explainability #artificial-intelligence #machine-learning-ai #machine-learning #deep learning
With ML models serving real people, misclassified cases (which are a natural consequence of using ML) affect people’s lives and sometimes treat them very unfairly. This makes the ability to explain your models’ predictions a requirement rather than just a nice-to-have.
Machine learning model development is hard, especially in the real world.
Typically, you need to:
And that is not all.
You should have the experiments you run and the models you train versioned in case you or anyone else needs to inspect them or reproduce the results in the future. From my experience, this moment comes when you least expect it, and the feeling of “I wish I had thought about it before” is so very real (and painful).
But there is even more.
#2020-aug-tutorials #overviews #explainability #explainable ai #interpretability #python #shap
The session “Explainable AI For Computer Vision” was presented by Avni Gupta, Technology Lead at Synduit, at CVDC 2020, a first-of-its-kind virtual conference on computer vision. The conference was organised by the Association of Data Scientists (ADaSCi), the premier global professional body of data science and machine learning professionals.
The primary point of the talk is that computer vision models most of the time act as a black box, and it is hard to explain what is actually going on inside the models or where the outcomes come from. She also mentioned some of the important libraries that can help make explainable AI possible for computer vision models.
According to Gupta, many a time when developers create a computer vision model, they find themselves interacting with a black box, unaware of what feature extraction is happening at each layer. With the help of explainable AI, it becomes easier to comprehend when enough layers have been added and what feature extraction has taken place at each layer.
#developers corner #black box in ai #computer vision model #explainability in ai #explainable ai #ml models