Death sentences and race-ethnicity biases

This article addresses a very serious topic in our post-slavery, post-colonisation and post-segregation world. How is it that, while laws explicitly condemn disparate treatment or outcomes, clear differences remain? These differences stir part of public opinion, and they should concern everyone, because this problem is not going to be solved by minorities alone.


This post will expose and explain a situation where Black lives mattered less. In Oklahoma, from 1990 to 2012, 143 offenders were sentenced to death. In 2017, Pierce et al. collected and analyzed the homicide data (4,668 offenders) and the death sentences. Their analysis shows several race/ethnicity effects in death-sentence outcomes. The paper contains all the details. The terminology used (“nonwhite”, “race”) comes from the paper. I don’t like the term “nonwhite”, but I cannot easily replace it each time with Black, Hispanic, Native American and Asian, so I will stick with it in this post.

1. An educated guess: who faces the best/worst odds of a death sentence?

Although the topic is very serious, I would like your first thoughts on what the potential biases could be. Assume an offender is on trial for a “willful (non-negligent) killing”. We know nothing else about him.

What would decrease the probability that he is sentenced to death?

  1. Lower odds of a death sentence if the offender is white.
  2. Lower odds of a death sentence if the victim is not white.
  3. The odds are about the same; the justice system is fair, isn’t it?
  4. Your question is biased (please say how; it’s interesting).

What do you think? You can put your answer in the comments below.

2. From homicide offenders to death sentences, an apparent equality of outcomes

Let’s go back to Oklahoma, where the authors compiled homicides and death sentences for the 1990–2012 period.


Homicide offenders and death sentences statistics, Oklahoma, 1990–2012. Source: Pierce et al. paper

An overview by race/ethnicity gives us a first glimpse of the story:

  • Homicide Rate (HR = HO/POP): Nonwhite people are clearly over-represented among homicide offenders. Although any prevention or mitigation strategy to reduce homicides should take this into account, we will follow the article and focus on death-sentence rates, since sentencing lies at the heart of the justice system rather than upstream of it.
  • Death Sentences Rate (DSR = DS/HO): DSRs are very similar for white and nonwhite homicide offenders (3.2% vs. 3.0%).

The Death Sentences Rate (DSR) is defined as the number of death sentences divided by the number of homicide offenders. Equality of DSR is the central notion of fairness here, and we will compute it along multiple dimensions.
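As a minimal sketch, the DSR computation looks like this. The totals (143 death sentences, 4,668 homicide offenders) come from the article; everything else is just arithmetic.

```python
def dsr(death_sentences: int, homicide_offenders: int) -> float:
    """Death Sentences Rate: death sentences per homicide offender, in percent."""
    return 100.0 * death_sentences / homicide_offenders

# Oklahoma, 1990-2012, totals cited in the article.
total_ds, total_ho = 143, 4668
print(f"Overall DSR: {dsr(total_ds, total_ho):.1f}%")  # ~3.1%
```

The same function applies per group once the offender counts are broken down by race/ethnicity, victim race, or any other dimension.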

Equality of outcomes means that similar homicides (under similar degree of felony circumstances) should lead to similar Death Sentences Rates (DSR).
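One way to check whether two DSRs are meaningfully different is a two-proportion z-test. The group counts below are hypothetical, chosen only so the rates reproduce the reported 3.2% vs. 3.0%; a real analysis would use the paper's exact per-group totals.

```python
import math

def two_proportion_z_test(ds1: int, ho1: int, ds2: int, ho2: int):
    """Return (z, two-sided p-value) for H0: the two groups have equal DSR."""
    p1, p2 = ds1 / ho1, ds2 / ho2
    pooled = (ds1 + ds2) / (ho1 + ho2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / ho1 + 1 / ho2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via math.erf).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical split: white 64/2000 (3.2%), nonwhite 80/2668 (~3.0%).
z, p = two_proportion_z_test(64, 2000, 80, 2668)
print(f"z = {z:.2f}, p = {p:.2f}")
```

With rates this close, the test finds no significant difference between the two overall DSRs, which matches the apparent equality of outcomes. The rest of the article shows why this aggregate view is misleading.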

So the DSR looks similar whatever the race/ethnicity of the offender. Has the full story been told for Oklahoma? No, it has not; there is far more to it.

#data-science #fairness #bias #data-analysis


