From car insurance quotes to which posts you see on social media, our online lives are guided by invisible, inscrutable algorithms. They help private companies and governments make decisions — or automate them altogether — using massive amounts of data. But despite how crucial they are to everyday life, most people don’t understand how algorithms use their data to make decisions, which means serious problems can go undetected. (Take, for example, research last year that showed anti-Black bias in a widely used algorithm that helps hospitals identify patients in need of extra medical care.)

The New Zealand government has a plan to address this problem with what officials are calling the world’s first algorithm charter: a set of rules and principles for government agencies to follow when using algorithms, designed to let people peek under the hood. By leading the way on responsible algorithm oversight, New Zealand hopes to set a model for other countries, demonstrating the value of transparency about how algorithms shape daily life.

Agencies that sign the charter make a number of commitments. For instance, they agree to publicly disclose in “plain English” when and how algorithms are used, to ensure their algorithms do not perpetuate bias, and to allow peer review to guard against “unintended consequences.”

The charter also requires that Te Ao Māori, the Māori worldview, inform both the development of algorithms and their use, and asks agencies to provide a point of contact the public can use to ask about algorithms and to challenge any decision an algorithm makes.

Given that algorithms reach into every facet of government, from calculating unemployment payments to determining how police patrol a neighborhood and profile the people who live there, insight into how those algorithms actually work will help hold governments accountable for keeping them fair.

The charter already has a long list of signatories, including the Ministry of Education, the Ministry for the Environment, Statistics New Zealand, the New Zealand Defence Force, and many more. Notably missing from the list are the country’s police force and spy agencies such as the Government Communications Security Bureau.

Though these issues can sound technical, algorithms in government can have huge impacts on public life. The New York Times reported in early 2020 that algorithms are used in the United States to “set police patrols, prison sentences and probation rules,” and in the Netherlands, “an algorithm flagged welfare fraud risks.”

There is rarely a way to see what data was used to reach these decisions, such as whether the algorithm considered gender, zip code, age, or any number of other factors, let alone whether the data used to train the algorithm was fair in the first place. This opacity can produce “bias by proxy,” in which a variable stands in for an outcome it has no real connection to: measuring a teacher’s effectiveness by students’ standardized test scores, for example, when other systemic factors may be at work.
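
To make the mechanism concrete, here is a minimal sketch in Python, using entirely synthetic data and an invented scoring rule, of how a model that never sees a protected attribute can still penalize one group through a correlated proxy such as zip code:

```python
import random

# Synthetic illustration of "bias by proxy" (all numbers are invented):
# the scoring rule below never sees a person's group, only their zip code,
# yet zip code is correlated with group, so outcomes still diverge.
random.seed(0)

people = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Residential segregation in the synthetic data: group B residents
    # are far more likely to live in zip code 2.
    weights = [8, 2] if group == "A" else [2, 8]
    zip_code = random.choices([1, 2], weights=weights)[0]
    people.append((group, zip_code))

def risk_score(zip_code):
    # A "neutral" rule learned from historical data tied to neighborhoods.
    return 0.8 if zip_code == 2 else 0.2

for g in ["A", "B"]:
    members = [p for p in people if p[0] == g]
    flagged = [p for p in members if risk_score(p[1]) > 0.5]
    print(f"group {g}: {len(flagged) / len(members):.0%} flagged high-risk")

# Typical output: group A is flagged ~20% of the time, group B ~80%,
# even though group membership was never an input to the score.
```

Auditing for this kind of disparity requires knowing which inputs a model uses, which is exactly the disclosure the charter asks for.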

A study by ProPublica found that this kind of bias is commonplace. Examining an algorithm used to generate risk scores for people arrested by a police department, the investigation found that the formula was likely to “falsely flag Black defendants as future criminals,” and that “white defendants were mislabeled as low risk more often than black defendants.”
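
The core of that kind of audit can be expressed in a few lines. Below is a toy sketch (the records are invented, and this is not ProPublica’s data or code) of how one would compare false positive rates, meaning the share of people flagged high-risk who did not go on to reoffend, across groups:

```python
# Toy audit in the spirit of ProPublica's analysis (records are invented).
# Each record: (group, predicted_high_risk, reoffended_within_two_years)
records = [
    ("black", True, False), ("black", True, True), ("black", False, False),
    ("black", True, False), ("white", False, False), ("white", True, True),
    ("white", False, True), ("white", False, False),
    # ...a real audit would use thousands of rows from court records
]

def false_positive_rate(rows):
    # Among people who did NOT reoffend, how many were flagged high-risk?
    did_not_reoffend = [r for r in rows if not r[2]]
    wrongly_flagged = [r for r in did_not_reoffend if r[1]]
    return len(wrongly_flagged) / len(did_not_reoffend)

for group in ("black", "white"):
    rows = [r for r in records if r[0] == group]
    print(f"{group}: false positive rate = {false_positive_rate(rows):.0%}")
```

On this toy data the false positive rate is sharply higher for one group than the other, which is the shape of the disparity ProPublica reported.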

Biased algorithms are a problem in New Zealand as well: The Guardian reports that one of the charter’s signatories, the country’s Accident Compensation Corporation, “was criticised in 2017 for using algorithms to detect fraud among those on its books.” Similar concerns have been raised in the past about the country’s corrections and immigration agencies, both of which have also signed the charter.

Requiring algorithms to be documented in plain English could help mitigate their impact on the people directly affected by letting them verify whether they were treated fairly. Plain-language documentation would let people read how a computer reached a conclusion about them and give them an official way to question a decision that appears unfair.
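
What that disclosure might look like in practice is an open question. As one hypothetical shape (every field name below is an assumption, not something the charter prescribes), an agency could publish a structured record alongside each automated decision:

```python
from dataclasses import dataclass

# Hypothetical plain-English decision record. Nothing here comes from the
# charter text itself; it is one guess at what "plain English" disclosure
# plus a required point of contact could look like as data.
@dataclass
class DecisionRecord:
    decision: str
    factors_used: list[str]
    factors_excluded: list[str]
    explanation: str
    review_contact: str  # the charter's required point of contact

record = DecisionRecord(
    decision="Benefit application routed to manual review",
    factors_used=["income history", "employment status"],
    factors_excluded=["gender", "ethnicity", "postcode"],
    explanation=(
        "Reported income varied more than the screening model's threshold, "
        "so a caseworker will re-check the application by hand."
    ),
    review_contact="algorithm-queries@example.govt.nz",  # placeholder address
)
print(record.explanation)
```

The point is not the format but the affordance: a person can see which factors were and were not used, and knows whom to contact to challenge the outcome.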

Granted, there have been problems with this kind of policy in the past. New York City enacted an “algorithmic accountability” bill in 2018 that was intended to bring transparency to the automated systems the city government uses. Two years later, CityLab reported that bureaucratic roadblocks had kept even the most basic transparency measure, a list of the automated systems the city uses, out of the hands of the task force charged with implementing the policy.

Still, if implemented correctly, New Zealand’s charter could help citizens build trust in how the government uses their data to guide their lives. For a notable example of what happens when that trust is missing, look to Toronto, where Alphabet’s Sidewalk Labs failed to get its experimental smart-city project off the ground.
