Anti-racism, algorithmic bias, and policing: a brief introduction

Recently I’ve been interested in various questions relating to anti-racism, algorithmic bias, and policing.

What does anti-racist policing look like?

What do we mean by algorithmic bias and algorithmic fairness?

How can data science and machine learning practitioners ensure they are being anti-racist in their work?

Traditionally the purpose of policing has been to ensure the everyday safety of the general public. Often this has involved police forces responding to reports of suspected criminal activity. However, we may be entering a new age of policing. New technologies, including traditional data analysis as well as what might be called machine learning or AI, allow police forces to make predictions about suspected criminal activity that have not been possible until now.

We may be in a period where technological developments have advanced faster than the regulation needed to ensure these technologies are used safely. I think of this as the ‘safety gap’ or the ‘accountability gap’.

Using a few recent examples, I hope to answer these questions about anti-racism, algorithmic bias, and policing, and to introduce you to thinking about the related issues of safety and accountability.

In July, MIT Technology Review published an article titled “Predictive policing algorithms are racist. They need to be dismantled.”


This article tells the story of an activist turned founder called Yeshimabeit Milner, who co-founded Data for Black Lives in 2017 to fight back against bias in the criminal justice system, and to dismantle the so-called school-to-prison pipeline.

Milner’s focus is on predictive policing tools and abuse of data by police forces.

According to the article, there are two broad types of predictive policing algorithm.

Location-based algorithms, which use inputs such as places, events, historical crime rates, and weather conditions to create a crime ‘weather forecast’. PredPol, used by dozens of city police forces in the US, is one example.
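To make the ‘crime weather forecast’ idea concrete, here is a deliberately simplified sketch: it scores hypothetical map grid cells by recency-weighted counts of past reported incidents. The grid cells, dates, and half-life parameter are all made up for illustration, and this is not how PredPol or any commercial product actually works.

```python
# Toy "crime weather forecast": score each map grid cell from past reports.
# Purely illustrative; not the internals of PredPol or any real system.
from collections import defaultdict
from datetime import date

# Hypothetical historical reports: (grid_cell_id, date_reported)
reports = [
    ("cell_12", date(2020, 6, 1)),
    ("cell_12", date(2020, 6, 20)),
    ("cell_07", date(2020, 3, 15)),
]

def forecast(reports, today, half_life_days=30):
    """Recency-weighted count per cell: recent reports count more."""
    scores = defaultdict(float)
    for cell, when in reports:
        age_days = (today - when).days
        scores[cell] += 0.5 ** (age_days / half_life_days)
    return dict(scores)

# Note: the scores only reflect where incidents were *reported* in the past,
# so any bias in historical reporting is fed straight back into the forecast.
print(forecast(reports, today=date(2020, 7, 1)))
```

That last comment is the crux of the article’s argument: a forecast trained on historical reports can only reproduce the patterns, and the biases, in those reports.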

#data #ethics #anti-racism #blacklivesmatter #machine-learning


Shawn Durgan

10 Writing steps to create a good project brief - Mobile app development

Developing a mobile application can often be more challenging than it seems at first glance. Whether you’re a developer, UI designer, project lead or CEO of a mobile-based startup, writing good project briefs prior to development is pivotal. According to Tech Jury, 87% of smartphone users spend time exclusively on mobile apps, with 18-24-year-olds spending 66% of total digital time on mobile apps. Of that, 89% of the time is spent on just 18 apps depending on individual users’ preferences, making proper app planning crucial for success.

Today’s audiences know what they want and don’t want in their mobile apps, encouraging teams to carefully write their project plans before they approach development. But how do you properly write a mobile app development brief without sacrificing your vision and staying within the initial budget? Why should you do so in the first place? Let’s discuss that and more in greater detail.

Why a Good Mobile App Project Brief Matters


It’s worth discussing the significance of mobile app project briefs before we tackle the writing process itself. In practice, a project brief is used as a reference tool for developers to remain focused on the client’s deliverables. Approaching the development process without written and approved documentation can lead to drastic, last-minute changes, misunderstanding, as well as a loss of resources and brand reputation.

For example, developing a mobile app that filters restaurants based on food type, such as Happy Cow, means that developers should stay focused on that goal. Knowing which features, UI elements, and APIs are necessary will help team members collaborate to meet expectations. Whether you develop an app under your own brand’s banner or provide outsourced coding and design services to clients, briefs can give you several benefits:

  • Clarity on what your mobile app project “is” and “isn’t” early in development
  • Point of reference for developers, project leads, and clients throughout the cycle
  • Smart allocation of available time and resources based on objective development criteria
  • Streamlined project data storage for further app updates and iterations

Writing Steps to Create a Good Mobile App Project Brief


1. Establish the “You” Behind the App

Depending on how “open” your project is to the public, you will want to write a detailed section about who the developers are. Elements such as company name, address, project lead, project title, and contact information should be included in this introductory segment. Regardless of whether you build an in-house app or develop for an outside client, this section makes the document easy to store and retrieve later.

#android app #ios app #minimum viable product (mvp) #mobile app development #web development #how do you write a project design #how to write a brief #how to write a project summary #how to write project summary #program brief example #project brief #project brief example #project brief template #project proposal brief #simple project brief template

Siphiwe Nair

Are there Biases in Big Data Algorithms? What can we do?

Big Data and Machine Learning often appear to be the buzzword answers to every problem. Sectors such as fraud prevention, healthcare, and sales are just a few of the areas thought to benefit from self-learning, self-improving machines that can be trained on enormous datasets.

However, how carefully do we examine these algorithms and investigate the potential biases that could affect their results?

Companies use various kinds of big data analytics to make decisions, draw correlations, and make predictions about their constituents or partners. The market for data is huge and growing quickly; it is estimated to reach $100 billion by the end of the decade.

Data and data sets are not unbiased; they are manifestations of human design. We give numbers their voice, draw insights from them, and define their significance through our understanding. Hidden biases in both the collection and analysis stages present extensive risks, and are as essential to the big-data equation as the numbers themselves.

While such complex datasets may contain important information about why customers choose to purchase certain items and not others, the scale of the available information makes it impractical for an individual to analyse it and spot any patterns present.

This is why machine learning is frequently regarded as the solution to the ‘Big Data Problem.’ Automating the analysis is one way to deconstruct such datasets; however, conventional algorithms must be pre-programmed to consider specific factors and to search for specific levels of significance.

Algorithms of this sort have existed for a long time, and companies often use them to scale their operations by applying repeatable patterns to everyone.
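As a rough illustration of what “pre-programmed to consider specific factors and to search for specific levels of significance” can mean in practice, the sketch below hard-codes both the factor being tested and the significance threshold. The data and variable names are invented, and the example assumes SciPy is available.

```python
# A pre-programmed analysis: the factor ("weekday vs. weekend") and the
# significance level (0.05) are both fixed in advance by a human analyst,
# rather than discovered from the data by a learning algorithm.
# Assumes SciPy is installed; the purchase counts below are made up.
from scipy import stats

weekday_purchases = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
weekend_purchases = [18, 21, 19, 22, 20, 17, 23, 19, 21, 20]

ALPHA = 0.05  # pre-chosen significance level

t_stat, p_value = stats.ttest_ind(weekday_purchases, weekend_purchases)
if p_value < ALPHA:
    print(f"Weekend effect looks significant (p = {p_value:.4f})")
else:
    print(f"No significant weekend effect (p = {p_value:.4f})")
```

A learning system, by contrast, would be left to find which factors matter on its own, which is precisely why its behaviour is harder to inspect.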

This means that whether or not you are interested in big data, algorithms, and tech, you are already part of this, and it will affect you more and more.

#big data #latest news #biases in big data algorithms #are there biases in big data algorithms. what can we do? #algorithms #web

Agnes Sauer

The Enduring Anti-Black Racism of Google Search

When Algorithms of Oppression was published in 2018, it was a landmark work that interrogated the racism encoded into popular technology products like Google’s search engine. Given that many Americans are currently using Google search to try to understand racism after the national uprising sparked by the murder of George Floyd, it’s a good time to remember that the architecture they are using to do so is itself deeply compromised — and how that came to pass. This excerpt, from Safiya Umoja Noble’s enduring work, explains why anti-Black racism appears, and endures, in tech products we are told to view as neutral.


On June 28, 2016, Black feminist and mainstream social media erupted over the announcement that Black Girls Code, an organization dedicated to teaching and mentoring African American girls interested in computer programming, would be moving into Google’s New York offices. The partnership was part of Google’s effort to spend $150 million on diversity programs that could create a pipeline of talent into Silicon Valley and the tech industries. But just two years before, searching the phrase “black girls” surfaced “Black Booty on the Beach” and “Sugary Black Pussy” to the first page of Google results, out of the trillions of web-indexed pages that Google Search crawls.

In part, the intervention of teaching computer code to African American girls through projects such as Black Girls Code is designed to ensure fuller participation in the design of software and to remedy persistent exclusion. The logic of new pipeline investments in youth was touted as an opportunity to foster an empowered vision for Black women’s participation in Silicon Valley industries. Discourses of creativity, cultural context, and freedom are fundamental narratives that drive the coding gap, or the new coding divide, of the 21st century.

Neoliberalism has emerged and served as a framework for developing social and economic policy in the interest of elites, while simultaneously crafting a new worldview: an ideology of individual freedoms that foreground personal creativity, contribution, and participation, as if these engagements are not interconnected to broader labor practices of systemic and structural exclusion. In the case of Google’s history of racist bias in search, no linkages are made between Black Girls Code and remedies to the company’s current employment practices and product designs. Indeed, the notion that lack of participation by African Americans in Silicon Valley is framed as a “pipeline issue” posits the lack of hiring Black people as a matter of people unprepared to participate, despite evidence to the contrary.

Google, Facebook, and other technology giants have been called to task for this failed logic. Laura Weidman Powers, of CODE2040, stated in an interview with Jessica Guynn at USA Today, “This narrative that nothing can be done today and so we must invest in the youth of tomorrow ignores the talents and achievements of the thousands of people in tech from underrepresented backgrounds and renders them invisible.” Blacks and Latinos are underemployed despite the increasing numbers graduating from college with degrees in computer science.

Filling the pipeline and holding “future” Black women programmers responsible for solving the problems of racist exclusion and misrepresentation in Silicon Valley or in biased product development is not the answer. Commercial search prioritizes results predicated on a variety of factors that are anything but objective or value-free. Indeed, there are infinite possibilities for other ways of designing access to knowledge and information, but the lack of attention to the kind of White and Asian male dominance that Guynn reported sidesteps those who are responsible for these companies’ current technology designers and their troublesome products.

Framing the problems as “pipeline” issues, instead of as issues of racism and sexism that extend from employment practices to product design, misses the point: “Black girls need to learn how to code” is an excuse for not addressing the persistent marginalization of Black women in Silicon Valley.

Who is responsible for the results?

As a result of the lack of African Americans and people with deeper knowledge of the sordid history of racism and sexism working in Silicon Valley, products are designed with a lack of careful analysis about their potential impact on a diverse array of people. If Google software engineers are not responsible for the design of their algorithms, then who is?

These are the details of what a search for “black girls” would yield for many years, even though the words “porn,” “pornography,” and “sex” were not included in the search box. In the text for the first page of results, for example, the word “pussy,” as a noun, is used four times to describe Black girls. Other words in the lines of text on the first page include “sugary” (two times), “hairy” (one), “sex” (one), “booty/ass” (two), “teen” (one), “big” (one), “porn star” (one), “hot” (one), “hardcore” (one), “action” (one), “galeries [sic]” (one).

In the case of the first page of results on “black girls,” I clicked on the link for both the top search result (unpaid) and the first paid result, which is reflected in the right-hand sidebar, where advertisers that are willing and able to spend money through Google AdWords have their content appear in relationship to these search queries.

All advertising in relationship to Black girls for many years has been hypersexualized and pornographic, even if it purports to be just about dating or social in nature. Additionally, some of the results such as the U.K. rock band Black Girls lack any relationship to Black women and girls. This is an interesting co-optation of identity, and because of the band’s fan following as well as possible search engine optimization strategies, the band is able to find strong placement for its fan site on the front page of the Google search.

Published text on the web can have a plethora of meanings, so in my analysis of all of these results, I have focused on the implicit and explicit messages about Black women and girls in both the texts of results or hits and the paid ads that accompany them. By comparing these to broader social narratives about Black women and girls in dominant U.S. popular culture, we can see the ways in which search engine technology replicates and instantiates these notions.

#google #racism #women #algorithms #technology

Noah Rowe

New Zealand Has a Radical Idea for Fighting Algorithmic Bias

From car insurance quotes to which posts you see on social media, our online lives are guided by invisible, inscrutable algorithms. They help private companies and governments make decisions — or automate them altogether — using massive amounts of data. But despite how crucial they are to everyday life, most people don’t understand how algorithms use their data to make decisions, which means serious problems can go undetected. (Take, for example, research last year that showed anti-Black bias in a widely used algorithm that helps hospitals identify patients in need of extra medical care.)


The New Zealand government has a plan to address this problem with what officials are calling the world’s first algorithm charter: a set of rules and principles for government agencies to follow when implementing algorithms that allow people to peek under the hood. By leading the way with responsible algorithm oversight, New Zealand hopes to set a model for other countries by demonstrating the value of transparency about how algorithms affect daily life.

Agencies that sign the charter make a number of commitments. For instance, they agree to publicly disclose in “plain English” when and how algorithms are used, ensure their algorithms do not perpetuate bias, and allow for a peer review to avoid “unintended consequences.”

The commitment also requires that the Te Ao Māori Indigenous perspective is included in the development of algorithms, as well as their use, and asks that agencies provide a point of contact that members of the public can use to inquire about algorithms, as well as challenge any decision made by an algorithm.

Given that algorithms are used across all facets of government, from calculating unemployment payments to deciding how police patrol a neighborhood and profile the people who live there, providing insight into how those algorithms truly work will help hold governments accountable for keeping them fair.

The charter has a big list of signatories so far, including the Ministry of Education, Ministry for the Environment, Statistics New Zealand, the New Zealand Defence Force, and many more. Notably missing from the list are the country’s police force and spy agencies like the Government Communications Security Bureau.

Though these issues can sound technical, algorithms in government can have huge impacts on public life. The New York Times reported in early 2020 that algorithms are used in the United States to “set police patrols, prison sentences and probation rules,” and in the Netherlands, “an algorithm flagged welfare fraud risks.”

There is rarely a way to see what data was used to reach these decisions, such as whether the algorithm considered gender, zip code, age, or any number of other factors, let alone whether the data used to train the algorithm was fair in the first place. This can lead to “bias by proxy,” where a variable is used to determine an outcome without an actual connection; for example, measuring a teacher’s effectiveness according to students’ scores on standardized tests when other systemic factors might be at work.
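A tiny synthetic example of bias by proxy: the rule below never looks at the protected group at all, but because group membership and zip code are correlated in the made-up data, the flag rate still differs sharply by group. Groups, zip codes, and probabilities are all invented for illustration.

```python
# "Bias by proxy" on synthetic data: the decision rule uses only zip code,
# never the group label, yet outcomes still split along group lines because
# zip code is strongly correlated with group.
import random

random.seed(0)

people = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    if group == "A":
        zip_code = 100 if random.random() < 0.9 else 200
    else:
        zip_code = 200 if random.random() < 0.9 else 100
    people.append({"group": group, "zip": zip_code})

def flag_for_review(person):
    """Decision rule that only ever sees the zip code."""
    return person["zip"] == 200

for g in ("A", "B"):
    members = [p for p in people if p["group"] == g]
    rate = sum(flag_for_review(p) for p in members) / len(members)
    print(f"Group {g}: flagged {rate:.0%} of the time")
```

Dropping a sensitive attribute from the inputs is therefore not enough on its own to make a decision rule fair.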

A study by ProPublica found that this kind of bias is commonplace; it examined an algorithm used to generate risk scores for people arrested by a police department. Not only was the formula likely to “falsely flag Black defendants as future criminals,” but the study also found that “white defendants were mislabeled as low risk more often than black defendants.”
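The kind of check ProPublica ran can be reproduced, in spirit, with very little code: group the records, then compare the false positive rate (people labelled high risk who did not go on to reoffend) across groups. The rows below are synthetic placeholders, not the study’s data.

```python
# False-positive-rate audit by group, in the spirit of the ProPublica
# analysis. The rows here are synthetic placeholders, not real data.
records = [
    # (group, predicted_high_risk, reoffended)
    ("black", True, False),
    ("black", True, True),
    ("black", False, False),
    ("white", False, False),
    ("white", True, True),
    ("white", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still labelled high risk."""
    negatives = [r for r in rows if not r[2]]
    if not negatives:
        return float("nan")
    return sum(r[1] for r in negatives) / len(negatives)

for group in ("black", "white"):
    rows = [r for r in records if r[0] == group]
    print(f"{group}: false positive rate = {false_positive_rate(rows):.2f}")
```

A real audit would use thousands of rows and test whether the gap between groups is statistically meaningful, but the basic measurement is this simple.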

Biased algorithms are a problem in New Zealand as well: The Guardian reported that one of the charter signatories, the country’s Accident Compensation Authority, “was criticised in 2017 for using algorithms to detect fraud among those on its books.” Similar concerns have been raised in the past about the corrections agency and the immigration authority, both of which have also signed the charter.

Requiring algorithms to be documented in plain text might help mitigate their impact on people who are directly affected by allowing them to verify whether or not they were treated fairly. Plain-text documentation would allow people to read about how a computer reached a conclusion about them and provide an official way to question that decision if it appeared unfair.
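As one hypothetical way of meeting a plain-language disclosure requirement, an agency could attach a human-readable record to every automated decision. The field names, wording, and contact address below are invented for illustration; the charter does not prescribe any particular format.

```python
# Hypothetical plain-language record attached to an automated decision.
# Field names and wording are invented; the charter specifies no format.
from dataclasses import dataclass

@dataclass
class DecisionRecord:
    decision: str          # what was decided
    inputs_used: list      # which data fields the algorithm looked at
    inputs_excluded: list  # sensitive fields deliberately not used
    contact: str           # where to question or challenge the decision

    def to_plain_language(self) -> str:
        return (
            f"Decision: {self.decision}. "
            f"This decision used: {', '.join(self.inputs_used)}. "
            f"It did not use: {', '.join(self.inputs_excluded)}. "
            f"To ask about or challenge it, contact {self.contact}."
        )

record = DecisionRecord(
    decision="benefit application approved",
    inputs_used=["income", "number of dependants"],
    inputs_excluded=["gender", "ethnicity"],
    contact="algorithm-queries@agency.example.nz",
)
print(record.to_plain_language())
```

Even a minimal record like this gives an affected person something concrete to point to when they challenge a decision.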

Granted, there have been problems with this kind of policy in the past. New York City enacted an “algorithmic accountability” bill in 2018 that was intended to bring transparency to various automated systems used by the city government. Two years later, CityLab reported that bureaucratic roadblocks had stopped even the most basic transparency — a list of automated systems used by the city — from being granted to the task force saddled with implementing the policy.

Still, if implemented correctly, New Zealand’s charter could help citizens build better trust in how the government uses their data and guides their lives. A notable example of how this lack of trust affects government can be found in Google’s failure to get its experimental city startup, Sidewalk Labs, off the ground in Toronto.

#machine-learning #bias #debugger #artificial-intelligence #algorithms