Noah Rowe

How Are Algorithms Biased?

Algorithms do what they’re taught. Unfortunately, some are inadvertently taught prejudices and harmful biases by societal patterns hidden in their training data.

After the end of the Second World War, the Nuremberg trials laid bare the atrocities conducted in medical research by the Nazis. In the aftermath of the trials, the medical sciences established a set of rules — The Nuremberg Code — to control future experiments involving human subjects. The Nuremberg Code has influenced medical codes of ethics around the world, as has the exposure of experiments that had failed to follow it even three decades later, such as the infamous Tuskegee syphilis experiment.

The direct harm that AI experiments and applications inflict on users is not as inhumane as that of the Tuskegee and Nazi experiments, but in the face of an overwhelming and growing body of evidence that algorithms are biased against certain demographic cohorts, it is important that this dialogue takes place sooner rather than later. AI systems can be biased based on who builds them, how they are developed, and how they are eventually deployed. This is known as algorithmic bias.

While data science has not yet developed a Nuremberg Code of its own, the social implications of artificial intelligence research are starting to be addressed in some curricula. But even as these debates spring up, what is still missing is a discipline-wide discussion of how to tackle the societal and historical inequities that AI algorithms reinforce.

We are flawed creatures. Every single decision we make involves a certain kind of bias. However, algorithms haven’t proven to be much better. Ideally, we would want our algorithms to make better-informed decisions devoid of bias, so as to ensure better social justice: equal opportunity for individuals and groups (such as minorities) to access resources, have their voices heard, and be represented in society.

When these algorithms amplify racial, social, and gender inequality instead of alleviating it, it becomes necessary to take stock of the ethical ramifications and the potential for harm in the technology.

This essay was motivated by two flashpoints: the discussion of racial inequality now raging worldwide, and Yann LeCun’s altercation with Timnit Gebru on Twitter, sparked by a disagreement over a downsampled image of Barack Obama (left) that a face-upsampling machine learning (ML) model reconstructed as a picture of a white man (right).

A pixelated picture of Obama upsampled into the face of a white man. (Courtesy: @hardmaru on Twitter)

The (rather explosive) argument was sparked by a tweet in which LeCun says that the resulting face is white because of bias in the data that trained the algorithm. Gebru responded sharply that the harms of ML systems cannot be reduced to biased data.
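
The technical claim at the centre of that exchange, that a model reconstructs faces that look like its training data, can be illustrated with a toy sketch. What follows is not the actual PULSE/StyleGAN pipeline that produced the Obama image; it is a deliberately crude nearest-neighbour "upsampler" over a hypothetical, skewed pool of synthetic faces, showing that once pixelation has destroyed the detail that distinguishes groups, reconstructions default to whatever dominates the training data.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_face(group, size=64):
    """Toy 'face': a smooth base image; group B adds fine, high-frequency
    detail that pixelation destroys."""
    base = rng.normal(0.5, 0.05, (size, size))
    if group == "B":
        checker = np.indices((size, size)).sum(axis=0) % 2  # period-2 pattern
        base = base + 0.3 * (checker - 0.5)
    return base

def downsample(img, factor=8):
    """Average-pool by `factor` to mimic pixelation."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical, skewed training pool: 90 group-A faces, 10 group-B faces.
pool = [("A", make_face("A")) for _ in range(90)]
pool += [("B", make_face("B")) for _ in range(10)]

def toy_upsample(pixelated):
    """'Reconstruct' by returning the pooled face whose own pixelated
    version best matches the input, a crude stand-in for a learned prior."""
    dists = [np.linalg.norm(downsample(face) - pixelated) for _, face in pool]
    return pool[int(np.argmin(dists))][0]

# Pixelate 200 fresh group-B faces and see which group they come back as.
results = [toy_upsample(downsample(make_face("B"))) for _ in range(200)]
print("group-B inputs reconstructed as group A:", results.count("A") / len(results))
# With a 9:1 skew and the distinguishing detail lost to pixelation,
# roughly 90% of group-B inputs come back as group-A faces.
```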

#ai-bias #fairness #racism #artificial-intelligence #machine-learning

Siphiwe Nair

Are There Biases in Big Data Algorithms? What Can We Do?

Big data and machine learning appear to be the buzzword answers to every problem. Fraud prevention, healthcare, and sales are only a few of the sectors thought to benefit from self-learning, self-improving machines that can be trained on colossal datasets.

But how carefully do we examine these algorithms and investigate the potential biases that could affect their results?

Companies use various kinds of big data analytics to make decisions, draw correlations, and make predictions about their constituents or partners. The market for data is huge and growing quickly; it is estimated to hit $100 billion before the end of the decade.

Data and datasets are not unbiased; they are manifestations of human design. We give numbers their voice, draw insights from them, and define their significance through our interpretations. Hidden biases in both the collection and analysis stages present considerable risks, and they are as essential to the big-data equation as the numbers themselves.

While such complex datasets may contain important information about why customers choose to purchase certain items and not others, the sheer scale of the available data makes it impractical for an individual to analyse it and spot the patterns it contains.

This is why machine learning is frequently regarded as the solution to the ‘Big Data Problem.’ Automating the analysis is one way to deconstruct such datasets; conventional algorithms, however, must be pre-programmed to consider specific factors and to look for specific levels of significance.
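
As a concrete, if simplified, contrast: instead of pre-programming which factors matter, a learned model infers them from historical examples, and with them any bias those examples contain. The sketch below uses scikit-learn and entirely hypothetical loan data; the variable names and thresholds are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 5000

# Hypothetical historical loan data: income, debt ratio, and a neighbourhood
# code. Past approvals were partly driven by neighbourhood, not just finances.
income = rng.normal(50, 15, n)
debt_ratio = rng.uniform(0, 1, n)
neighbourhood = rng.integers(0, 2, n)  # 0 or 1
approved = (
    ((income > 45) & (debt_ratio < 0.6) & (neighbourhood == 0))
    | ((income > 60) & (debt_ratio < 0.4))  # neighbourhood 1 held to a stricter bar
)

X = np.column_stack([income, debt_ratio, neighbourhood])
model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, approved)

# The learned coefficients show the model has absorbed the historical rule,
# neighbourhood included; nobody had to program that factor in explicitly.
coefs = model.named_steps["logisticregression"].coef_[0].round(2)
print(dict(zip(["income", "debt_ratio", "neighbourhood"], coefs)))
```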

Such pre-programmed algorithms have existed for a long time, and companies often use them to scale their operations by applying repeatable patterns to everyone.

This means that whether or not you are interested in big data, algorithms, and tech, you are already part of this system today, and it will influence you to an ever-increasing extent.

#big data #latest news #biases in big data algorithms #are there biases in big data algorithms. what can we do? #algorithms #web

Noah Rowe

New Zealand Has a Radical Idea for Fighting Algorithmic Bias

From car insurance quotes to which posts you see on social media, our online lives are guided by invisible, inscrutable algorithms. They help private companies and governments make decisions — or automate them altogether — using massive amounts of data. But despite how crucial they are to everyday life, most people don’t understand how algorithms use their data to make decisions, which means serious problems can go undetected. (Take, for example, research last year that showed anti-Black bias in a widely used algorithm that helps hospitals identify patients in need of extra medical care.)

(Related: “How Bias Ruins A.I.,” onezero.medium.com: in the wake of the Banjo CEO revelations, bias in A.I. comes under new scrutiny.)

The New Zealand government has a plan to address this problem with what officials are calling the world’s first algorithm charter: a set of rules and principles for government agencies to follow when implementing algorithms that allow people to peek under the hood. By leading the way with responsible algorithm oversight, New Zealand hopes to set a model for other countries by demonstrating the value of transparency about how algorithms affect daily life.

Agencies that sign the charter make a number of commitments. For instance, they agree to publicly disclose in “plain English” when and how algorithms are used, ensure their algorithms do not perpetuate bias, and allow for a peer review to avoid “unintended consequences.”

The charter also requires that the Te Ao Māori Indigenous perspective be included in both the development and the use of algorithms, and asks agencies to provide a point of contact that members of the public can use to inquire about algorithms or to challenge any decision made by one.

Given that algorithms are used across all facets of government, from calculating unemployment payments to deciding how police patrol a neighborhood and profile the people who live there, insight into how those algorithms really work will help hold governments accountable for keeping them fair.

The charter has a big list of signatories so far, including the Ministry of Education, Ministry for the Environment, Statistics New Zealand, the New Zealand Defence Force, and many more. Notably missing from the list are the country’s police force and spy agencies like the Government Communications Security Bureau.

Though these issues can sound technical, algorithms in government can have huge impacts on public life. The New York Times reported in early 2020 that algorithms are used in the United States to “set police patrols, prison sentences and probation rules,” and in the Netherlands, “an algorithm flagged welfare fraud risks.”

There is rarely a way to see what data was used to reach these decisions, such as whether the algorithm considered gender, zip code, age, or any number of other factors, let alone whether the data used to train the algorithm was fair in the first place. This can lead to “bias by proxy,” where a variable is used to determine an outcome it has no real connection to; for example, measuring a teacher’s effectiveness by students’ scores on standardized tests when other systemic factors might be at work.
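
A hedged sketch of what bias by proxy can look like in code: the model below never sees the protected attribute, only a correlated stand-in (a synthetic "zip code"), yet its scores still differ sharply between groups. All names and numbers here are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Hypothetical setup: a protected attribute the model never sees, and a
# zip code strongly correlated with it (think residential segregation).
group = rng.integers(0, 2, n)
zip_code = np.where(rng.random(n) < 0.9, group, 1 - group)  # 90% overlap

# A historical outcome partly driven by group membership (past discrimination).
outcome = (rng.random(n) < np.where(group == 0, 0.7, 0.4)).astype(int)

# Train only on the apparently neutral feature.
model = LogisticRegression().fit(zip_code.reshape(-1, 1), outcome)
scores = model.predict_proba(zip_code.reshape(-1, 1))[:, 1]

for g in (0, 1):
    print(f"group {g}: mean predicted score = {scores[group == g].mean():.2f}")
# The scores differ by group even though 'group' was never a model input.
```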

A study by ProPublica found that this kind of bias is commonplace. It examined an algorithm used to generate risk scores for people arrested by a police department: not only was the formula likely to “falsely flag Black defendants as future criminals,” but “white defendants were mislabeled as low risk more often than black defendants.”
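
The kind of disparity ProPublica described can be checked in a few lines once predictions and outcomes are available per group; the sketch below uses made-up arrays purely to show the computation of group-wise false positive and false negative rates.

```python
import numpy as np

rng = np.random.default_rng(7)
group = rng.integers(0, 2, 1000)       # two demographic groups (hypothetical)
reoffended = rng.integers(0, 2, 1000)  # ground-truth outcome
flagged = rng.integers(0, 2, 1000)     # the algorithm's high-risk flag

for g in (0, 1):
    mask = group == g
    fp = ((flagged == 1) & (reoffended == 0) & mask).sum()
    fn = ((flagged == 0) & (reoffended == 1) & mask).sum()
    fpr = fp / ((reoffended == 0) & mask).sum()
    fnr = fn / ((reoffended == 1) & mask).sum()
    print(f"group {g}: false positive rate {fpr:.2f}, false negative rate {fnr:.2f}")
# In these terms, ProPublica reported a much higher false positive rate for
# Black defendants and a higher false negative rate for white defendants.
```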

In New Zealand, biased algorithms are a problem as well, with The Guardian reporting that one of the charter signatories, the country’s Accident Compensation Authority, “was criticised in 2017 for using algorithms to detect fraud among those on its books.” Similar concerns about the correction agency and immigration authority have been raised in the past, both of which have signed on to the charter as well.

Requiring algorithms to be documented in plain language might help mitigate their impact on the people directly affected by allowing them to verify whether or not they were treated fairly. Plain-language documentation would let people read how a computer reached a conclusion about them and give them an official way to challenge that decision if it appeared unfair.

Granted, there have been problems with this kind of policy in the past. New York City enacted an “algorithmic accountability” bill in 2018 that was intended to bring transparency to various automated systems used by the city government. Two years later, CityLab reported that bureaucratic roadblocks had stopped even the most basic transparency — a list of automated systems used by the city — from being granted to the task force saddled with implementing the policy.

Still, if implemented correctly, New Zealand’s charter could help citizens build better trust in how the government uses their data and guides their lives. A notable example of how this kind of distrust plays out can be found in the failure of Alphabet’s experimental smart-city subsidiary, Sidewalk Labs, to get its project off the ground in Toronto.

#machine-learning #bias #debugger #artificial-intelligence #algorithms

Larry Kessler

Top 5 Inductive Biases In Deep Learning Models

Learning algorithms typically rely on mechanisms or assumptions that restrict the space of hypotheses they consider, also called the underlying model space. This mechanism is known as the inductive bias, or learning bias.

This mechanism encourages a learning algorithm to prioritise solutions with specific properties. In simple words, the learning bias or inductive bias is the set of implicit or explicit assumptions a machine learning algorithm makes in order to generalise from a set of training data.

Here, we have compiled a list of five interesting inductive biases, in no particular order, which are used in deep learning.
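
As one familiar instance of the idea, a convolutional layer encodes the assumption that useful patterns are local and can appear anywhere in an image (weight sharing plus translation equivariance), which is exactly the kind of restriction on the hypothesis space described above. A minimal PyTorch sketch follows; circular padding is used only to make the equivariance check exact at the borders.

```python
import torch
import torch.nn as nn

# Convolution assumes local, position-independent patterns (weight sharing);
# a dense layer mapping the same input to the same output assumes nothing.
conv = nn.Conv2d(1, 8, kernel_size=3, padding=1, padding_mode="circular")
dense = nn.Linear(28 * 28, 8 * 28 * 28)

def n_params(module):
    return sum(p.numel() for p in module.parameters())

print("conv parameters: ", n_params(conv))    # 8*1*3*3 + 8 = 80
print("dense parameters:", n_params(dense))   # 784*6272 + 6272, about 4.9 million

# Translation equivariance: shifting the input shifts the conv output the same way.
x = torch.randn(1, 1, 28, 28)
shifted = torch.roll(x, shifts=3, dims=-1)
same = torch.allclose(torch.roll(conv(x), shifts=3, dims=-1), conv(shifted), atol=1e-5)
print("equivariant to shifts:", same)
```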

#developers corner #ai biases #algorithm biases #deep learning #inductive bias deep learning #inductive biases

Madyson Reilly

Bias-Variance Trade-Off

Supervised learning is best understood with the help of the bias-variance trade-off. The main aim of any supervised model is to estimate a target function that predicts an output from input variables. Supervised learning algorithms analyse data in light of its previously known outcomes: every algorithm is trained on labelled data, meaning data for which the correct output is already known. The algorithm is trained on that labelled data repeatedly, and the machine then uses what it has learned to predict outcomes for new cases. Because these predictions resemble past outcomes, they help us make decisions about events that have not happened yet. Whether it is weather forecasting, predicting stock market or house prices, detecting email spam, building recommendation systems, self-driving cars, churn modelling, or forecasting product sales, supervised learning comes into action.

In supervised learning, you supervise the learning process: the data you have collected is labelled, so you know which input should map to which output. Learning is the process of making an algorithm map an input to a particular output using those labelled datasets. If the mapping is correct, the algorithm has learned successfully; if not, you adjust the algorithm so that it can learn correctly. A trained model can then make predictions for new, unseen data obtained later.

It is much like a teacher-student scenario. A teacher teaches the students from a book (the labelled dataset), the students learn from it, and later they sit a test (the algorithm’s predictions). If a student fails (overfitting or underfitting), the teacher coaches the student (hyperparameter tuning) to perform better next time. But there is a gap between this ideal picture and what is possible in practice: no student (algorithm) or teacher (dataset) is ever completely correct. Datasets can be unbalanced, full of missing values, improperly shaped and sized, and riddled with outliers that make any model’s task difficult; likewise, every model has weaknesses and makes errors in mapping inputs to outputs. This article is about those errors, which prevent models from performing at their best, and how we can overcome them.
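
As a minimal, hedged illustration of that pipeline, the sketch below uses scikit-learn’s built-in iris dataset (a convenience choice, not part of the original article): labelled examples go in, the algorithm learns the input-to-output mapping, and we check that mapping on examples it has never seen, which is the “test” the student sits.

```python
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Labelled data: flower measurements (inputs) and known species (outputs).
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# "Teaching": the algorithm learns the input-to-output mapping from labels.
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# "The test": accuracy on unseen data tells us whether it learned the mapping,
# and hyperparameters such as max_depth are the knobs we tune if it did not.
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```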


Before proceeding with model training, we should understand the errors (bias and variance) associated with it. Knowing about them not only helps us train better models but also helps us deal with underfitting and overfitting. A small empirical sketch of the first two error types follows the list below.

This predictive error is of three types:

1. Bias

2. Variance

3. Irreducible error
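
To make the first two concrete, here is a small empirical sketch (with a made-up target function and synthetic noise) that repeatedly fits polynomials of different degrees to noisy samples and estimates the squared bias and the variance on a fixed grid.

```python
import numpy as np

rng = np.random.default_rng(0)
x_grid = np.linspace(0, 1, 50)
noise_sd = 0.3  # source of the irreducible error

def true_f(x):
    """The (normally unknown) target function."""
    return np.sin(2 * np.pi * x)

def fit_and_predict(degree):
    """Fit a polynomial of the given degree to one noisy training sample
    and return its predictions on the fixed grid."""
    x = rng.uniform(0, 1, 30)
    y = true_f(x) + rng.normal(0, noise_sd, 30)
    return np.polyval(np.polyfit(x, y, degree), x_grid)

for degree in (1, 4, 10):
    preds = np.array([fit_and_predict(degree) for _ in range(300)])
    bias_sq = ((preds.mean(axis=0) - true_f(x_grid)) ** 2).mean()
    variance = preds.var(axis=0).mean()
    print(f"degree {degree:2d}: bias^2 = {bias_sq:.3f}, variance = {variance:.3f}")
# Low-degree fits are stable but systematically wrong (high bias, low variance);
# high-degree fits chase the noise (low bias, high variance). The noise term,
# noise_sd**2, is the irreducible error that no model can remove.
```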

#bias-variance-tradeoff #bias #artificial-intelligence #algorithmic-bias #data-science

Algorithmic Bias, Explained for Beginners

What is bias?

Algorithms are all around us, and we use and trust them daily. They continually make countless decisions for us: from small businesses to multinational companies, organisations thrive on algorithmic decision-making in crucial scenarios. Although algorithms may seem to have an unbiased, calculated nature, they aren’t any more objective than we are, because at the end of the day they are written by humans.

This is where “algorithmic bias” comes into play. Algorithmic bias is rooted in machine learning and deep learning, the mechanisms by which computers make important decisions, and both of these techniques depend on enormous amounts of data. The type of data being fed in therefore matters: the people who supply it play a significant role in the resulting decisions, as shown in the illustrated figure. Typically a group of people assembles the training data for machine learning, so what goes wrong when that data is entered in a biased manner?

Let’s take an example: Company A (a mug-producing company) has an image classification algorithm that recognises only certain types of mugs for its listings. So what about mugs with different characteristics and features from diverse geographical regions?

Training data of Company A: every example mug has a handle and a specific shape as its characteristics.

Image classification input data: does this mean the mugs pictured below cannot be identified, because the handle is missing or placed in an unfamiliar position?
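
Here is a toy version of Company A’s problem, with invented features and data: if every mug in the training set has a handle, a simple classifier can latch onto the handle as the defining characteristic, and a perfectly good handleless mug gets rejected.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 500

# Invented features per object: [has_handle, is_cylindrical, holds_liquid].
# In Company A's training data every mug has a handle, and none of the
# non-mugs (cans, vases, bowls) do, so the handle perfectly separates the classes.
mugs = np.tile([1, 1, 1], (n, 1))
not_mugs = np.column_stack(
    [np.zeros(n), rng.integers(0, 2, n), rng.integers(0, 2, n)])

X = np.vstack([mugs, not_mugs])
y = np.array([1] * n + [0] * n)  # 1 = mug, 0 = not a mug
clf = DecisionTreeClassifier(random_state=0).fit(X, y)

# A perfectly real mug from another market, just without a handle:
handleless_mug = [[0, 1, 1]]
print("classified as mug?", bool(clf.predict(handleless_mug)[0]))  # False
```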

This is still a comparatively small problem; the bigger problems arise when racial, gender, age, and legal differences crop up on a global level. A few real-world disasters of algorithmic bias include:

· In 2015, Google’s popular photo recognition tool erroneously labelled a photo of two Black people as gorillas.

· In the US, a crime-prediction algorithm incorrectly flagged Black defendants as likely repeat offenders far more often than white defendants.

These are just a few of many examples.

#data-science #algorithmic-bias #algorithms