Tyshawn Braun

On gender bias in word embeddings

The Natural Language Processing (NLP) group at Stanford University made publicly available the list of papers from their CS 384 seminar on Ethics and Social Issues in Natural Language Processing, and so I have been on a bit of a reading binge trying to learn more about this fascinating and important topic.

In this article, I want to explore the use of analogies for identifying biases in word embeddings by focusing on two papers on the topic: Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings (2016) [1] and Fair Is Better than Sensational: Man Is to Doctor as Woman Is to Doctor (2020) [2]. The first, which I will refer to as “the paper on debiasing,” is from the Stanford NLP list, and the second, referred to as “the paper on fairness,” is available through the journal Computational Linguistics (MIT Press).

But first things first.

What is a word embedding?

A word embedding is a vector representation of a word that can be used to convey the meaning of the word to a computer. Therefore, with a word embedding, an algorithm can take as input a numerical representation of a word, rather than simply relying on counts of words.

Word embeddings have been researched in some depth for machine learning applications, probably because they have some interesting (and perhaps unexpected) properties: (1) semantically similar words tend to have vectors that are close to each other in the vector space, and (2) the differences between word embeddings tend to produce vectors representing the difference in meaning between words (e.g., king − man + woman = queen).

Such differences between words can also be described using analogies (e.g., man is to (:) king as (::) woman is to (:) queen), and it seems as if many researchers have had their share of fun using word embeddings to fill in analogies (e.g., man : king :: woman : x). However, while it may be interesting to set up an analogy and see which word the algorithm selects for x, this kind of probing can also reveal dangerous biases that exist within our language.
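To make the analogy game concrete, here is a minimal sketch using gensim and a pretrained word2vec model (the model name, the probe words, and the exact completions are assumptions; results will vary with the embedding you load):

```python
# A hedged sketch of analogy completion with gensim's downloader API.
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")  # pretrained Google News vectors (large download)

# "man is to king as woman is to x": add 'woman' and 'king', subtract 'man'
print(wv.most_similar(positive=["woman", "king"], negative=["man"], topn=3))

# The same mechanics can surface stereotyped completions, e.g.
# "man is to doctor as woman is to x"
print(wv.most_similar(positive=["woman", "doctor"], negative=["man"], topn=3))
```

The second query is close to the kind of probe the two papers disagree about: whether the completions it returns reflect genuine bias in the embedding or an artifact of how the analogy task itself is set up.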

#artificial-intelligence #ai #language #data-science #nlp

Tyshawn Braun

How to Remove Gender Bias in Machine Learning Models: NLP and Word Embeddings

Most widely used word embeddings are glaringly sexist; let us look at some ways to de-bias such embeddings.

Note: This article reviews the arguments made by Bolukbasi et al. in the paper “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings”. All graphical drawings are made using draw.io.

Word embeddings are at the core of NLP applications, and they often end up biased toward one gender because of the stereotypes inherent in the large text corpora they are trained on. Such models, when deployed to production, can further widen gender inequality and have far-reaching consequences for our society as a whole.

To get the gist of what I’m talking about, here is a snippet from Bolukbasi et al., 2016, “Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings”:

"As an example, suppose the search query is “cmu computer science phd student” for a computer science Ph.D. student at Carnegie Mellon University. Now, the directory offers 127 nearly identical web pages for students — these pages differ only in the names of the students. …

However, word embeddings also rank terms related to computer science closer to male names than female names. The consequence is that, between two pages that differ only in the names Mary and John, the word embedding would influence the search engine to rank John’s web page higher than Mary."
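As a rough illustration of how such bias can be quantified, here is a hedged sketch that projects a few occupation words onto a single she − he direction. This is a simplification of the paper’s actual method, which derives a gender subspace from several definitional pairs using PCA; the model name and word list are illustrative:

```python
# A simplified, illustrative probe of gender association in pretrained vectors.
import numpy as np
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")  # pretrained Google News vectors

# Crude one-pair gender direction; the paper uses PCA over many definitional pairs.
gender_direction = wv["she"] - wv["he"]
gender_direction /= np.linalg.norm(gender_direction)

for word in ["programmer", "homemaker", "nurse", "engineer"]:
    vec = wv[word] / np.linalg.norm(wv[word])
    # Positive values lean toward "she", negative toward "he".
    print(f"{word}: {float(np.dot(vec, gender_direction)):+.3f}")
```

In a stereotyped embedding, occupations like “homemaker” and “nurse” tend to land on the “she” side while “programmer” and “engineer” land on the “he” side, which is precisely the pattern the debiasing method targets.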

So, what is a Word Embedding?

Word embeddings are a form of vocabulary representation. They are vector representations of words, in which spatial closeness captures the similarity or shared context between words.

For reference, here are four words represented by vectors. As expected, ‘Dog’ and ‘Cat’ are close to each other since both represent animals, and ‘Mango’ and ‘Apple’ are close to each other since they represent fruits. In contrast, the two groups are far apart from each other since they are not similar.

In this diagram, the vectors are two-dimensional for easier visualization; however, most word embedding models, such as Word2Vec and GloVe, have several hundred dimensions. For this article, we’ll be using Word2Vec for all the examples.
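To reproduce this intuition in code, here is a minimal sketch using gensim’s pretrained Word2Vec vectors (the model name is an assumption; any pretrained embedding containing these words would do):

```python
# A hedged sketch: within-group pairs should be more similar than cross-group pairs.
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")  # 300-dimensional Word2Vec vectors

pairs = [("dog", "cat"), ("mango", "apple"), ("dog", "mango"), ("cat", "apple")]
for a, b in pairs:
    print(f"similarity({a}, {b}) = {wv.similarity(a, b):.2f}")
```

If the vectors behave as the diagram suggests, the animal pair and the fruit pair should score noticeably higher than the cross-group pairs.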

#deep-learning #gender-bias #word-embeddings #machine-learning #artificial-intelligence #hackernoon-top-story

Lane Sanford

Word Embedding Fairness Evaluation

Word embeddings are dense vector representations of words trained from document corpora. They have become a core component of natural language processing (NLP) downstream systems because of their ability to efficiently capture semantic and syntactic relationships between words. A widely reported shortcoming of word embeddings is that they are prone to inherit stereotypical social biases exhibited in the corpora on which they are trained.

The problem of how to quantify the mentioned biases is currently an active area of research, and several different fairness metrics have been proposed in the literature in the past few years.

Although all of these metrics have a similar objective, the relationship between them is by no means clear. Two issues that prevent a clean comparison are that they operate on different inputs (pairs of words, sets of words, multiple sets of words, and so on) and that their outputs are incompatible with each other (real numbers, positive numbers, bounded ranges, etc.). This leads to a lack of consistency between them, which causes several problems when trying to compare and validate their results.

We propose the Word Embedding Fairness Evaluation (WEFE) as a framework for measuring fairness in word embeddings, and we released its implementation as an open-source library.

Framework

We propose an abstract view of a fairness metric as a function that receives queries as input, with each query formed by a set of target words and a set of attribute words. The target words describe the social groups in which fairness is intended to be measured (e.g., women, white people, Muslims), and the attribute words describe traits or attitudes by which a bias towards one of the social groups may be exhibited (e.g., pleasant vs. unpleasant terms). For more details on the framework, you can read our recently accepted IJCAI paper [1].
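To illustrate the query idea, here is a minimal sketch of a WEAT-style metric expressed under that abstraction. This is not the WEFE library’s API, just a from-scratch illustration; the embedding name and the word lists are assumptions:

```python
# A hedged sketch of a query-based fairness metric (WEAT-style association score).
import numpy as np
import gensim.downloader as api

wv = api.load("glove-wiki-gigaword-100")  # any pretrained embedding would do

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # Mean similarity of word w to attribute set A minus attribute set B.
    return (np.mean([cos(wv[w], wv[a]) for a in A])
            - np.mean([cos(wv[w], wv[b]) for b in B]))

def weat_score(T1, T2, A, B):
    # Differential association of the two target sets with the two attribute sets.
    return (sum(association(t, A, B) for t in T1)
            - sum(association(t, A, B) for t in T2))

# Illustrative query: female/male target terms vs. career/family attribute terms.
T1, T2 = ["she", "woman", "her"], ["he", "man", "his"]
A, B = ["career", "salary", "office"], ["home", "family", "children"]
print(weat_score(T1, T2, A, B))
```

A score far from zero suggests one target set is more strongly associated with one attribute set than the other, which is the kind of signal the metrics listed below formalize in different ways.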

WEFE implements the following metrics:

  • Word Embedding Association Test (WEAT)
  • Relative Norm Distance (RND)
  • Relative Negative Sentiment Bias (RNSB)
  • Mean Average Cosine (MAC)

#bias #ethics #machine learning #word embeddings

Elton Bogan

Word Embeddings Versus Bag-of-Words: The Curious Case of Recommender Systems

Are word embeddings always the best choice?

If you can challenge a well-accepted view in data science with data, that’s pretty cool, right? After all, “in data we trust”, or so we profess! Word embeddings have caused a revolution in the world of natural language processing, as a result of which we are much closer to understanding the meaning and context of text and transcribed speech today. It is a world apart from the good old bag-of-words (BoW) models, which rely on frequencies of words under the unrealistic assumption that each word occurs independently of all others. The results have been nothing short of spectacular with word embeddings, which create a vector for every word. One of the oft-cited success stories of word embeddings involves subtracting the man vector from the king vector and adding the woman vector, which returns the queen vector.


Very smart indeed! However, I raise the question of whether word embeddings should always be preferred to bag-of-words. In building a review-based recommender system, it dawned on me that while word embeddings are incredible, they may not be the most suitable technique for my purpose. As crazy as it may sound, I got better results with the BoW approach. In this article, I show that the uber-smart ability of word embeddings to understand related words actually turns out to be a shortcoming when it comes to making better product recommendations.

Word embeddings in a jiffy

Simply stated, word embeddings consider each word in its context. For example, in the word2vec approach, a popular technique developed by Tomas Mikolov and colleagues at Google, we generate for each word a vector with a large number of dimensions. Using neural networks, the vectors are created by predicting, for each word, what its neighboring words may be. Multiple Python libraries like spaCy and gensim have built-in word vectors, so while word embeddings have been criticized in the past on grounds of complexity, we don’t have to write the code from scratch. Unless you want to dig into the math of one-hot encoding, neural nets and other complex stuff, using word vectors today is as simple as using BoW. After all, you don’t need to know the theory of internal combustion engines to drive a car!
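As a quick illustration of how accessible built-in vectors are, here is a hedged sketch using spaCy’s medium English model (assumed installed via "python -m spacy download en_core_web_md"; the example review texts are made up):

```python
# Comparing short review texts with spaCy's built-in word vectors.
import spacy

nlp = spacy.load("en_core_web_md")  # the small model ships without real word vectors

doc1 = nlp("The phone has a great camera and battery life.")
doc2 = nlp("Excellent photos and the battery lasts all day.")
doc3 = nlp("The delivery was late and the box was damaged.")

# similarity() compares the averaged word vectors of the two documents.
print(doc1.similarity(doc2))  # related reviews: relatively high score
print(doc1.similarity(doc3))  # unrelated review: noticeably lower score
```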

#cosine-similarity #bag-of-words #python #word-embeddings #recommendation-system

Larry Kessler

Top 5 Inductive Biases In Deep Learning Models

Learning algorithms generally rely on certain mechanisms or assumptions that place restrictions on the space of hypotheses, which can also be thought of as the underlying model space. This mechanism is known as the inductive bias or learning bias.

This mechanism encourages learning algorithms to prioritise solutions with specific properties. In simple words, a learning bias or inductive bias is the set of implicit or explicit assumptions a machine learning algorithm makes in order to generalise from a set of training data.

Here, we have compiled a list of five interesting inductive biases, in no particular order, which are used in deep learning.

#developers corner #ai biases #algorithm biases #deep learning #inductive bias deep learning #inductive biases

Daron Moore

Five Cognitive Biases In Data Science (And how to avoid them)

Recently, I was reading Rolf Dobelli’s The Art of Thinking Clearly, which made me think about cognitive biases in a way I never had before. I realized how deep-seated some cognitive biases are. In fact, we often don’t even consciously realize when our thinking is being affected by one. For data scientists, these biases can really change the way we work with data and make our day-to-day decisions, and generally not for the better.

As data scientists, our job is to make sense of the facts. In carrying out this analysis, we have to make subjective decisions, though. So even though we work with hard facts and data, there’s a strong interpretive component to data science.

As a result, we data scientists need to be extremely careful, because all humans are very much susceptible to cognitive biases. We’re no exception. In fact, I have seen many instances where data scientists ended up making decisions based on pre-existing beliefs, limited data or just irrational preferences.

#advice #bias #cognitive bias #confirmation bias #data science