Angela Dickens

Researchers Look at How ‘Algorithmic Coloniality’

As artificial intelligence (AI) increasingly transforms our world, a new paper suggests re-examining the society we're living in now to chart a better way forward. "Computer systems embody values," explained paper co-author Shakir Mohamed, "And to build a deeper understanding of values and power is why we turn to the critical theory and especially decolonial theories."

The paper defines "decolonisation" as "the intellectual, political, economic and societal work concerned with the restoration of land and life following the end of historical colonial periods." It seeks to root out the vestiges of colonial thinking that are still with us today, noting that "territorial appropriation, exploitation of the natural environment and of human labor, and direct control of social structures are the characteristics of historical colonialism."

Mohamed is a research scientist in statistical machine learning and AI at DeepMind, an AI research company. He teamed up with DeepMind senior research scientist William Isaac, and with Marie-Therese Png, a Ph.D. candidate studying algorithmic coloniality at the Oxford Internet Institute. Together they’ve produced a 28-page paper exploring a role for two kinds of theories — both post-colonial and decolonial — “in understanding and shaping the ongoing advances in artificial intelligence.”

The paper includes a warning that AI systems “pose significant risks, especially to already vulnerable peoples.” But in the end, it also attempts to provide some workable solutions.

Critical Perspectives

"Weapons of Math Destruction" book cover (via Wikipedia)

The researchers' paper cites Cathy O'Neil's 2016 book "Weapons of Math Destruction," which argues that "big data increases inequality and threatens democracy" in high-stakes areas including policing, lending, and insurance.

For algorithmic (or automated) oppression in action, the paper points to “predictive” surveillance systems that “risk entrenching historical injustice and amplify[ing] social biases in the data used to develop them,” as well as algorithmic “decision systems” used in the U.S. criminal justice system “despite significant evidence of shortcomings, such as the linking of criminal datasets to patterns of discriminatory policing.”

Commenting on the work, VentureBeat suggests the authors “incorporate a sentiment expressed in an open letter Black members of the AI and computing community released last month during Black Lives Matter protests, which asks AI practitioners to recognize the ways their creations may support racism and systemic oppression in areas like housing, education, health care, and employment.”

But though it's a very timely paper, that's mostly a coincidence, says co-author William Isaac, a senior research scientist at DeepMind. He told me the paper had its roots in a blog post Shakir Mohamed wrote almost two years ago outlining some of the initial ideas, influenced by work in related areas like data colonialism. Then last year co-author Marie-Therese Png helped organize a panel during Oxford's Intercultural Digital Ethics Symposium, which led to the paper.

In the paper, the researchers provide a stunning example of a widely used algorithmic screening tool for a "high-risk care management" healthcare program in 2002 which, it turned out, "relied on the predictive utility of an individual's health expenses." The end result? Black patients were rejected for the healthcare program more often than white patients, "exacerbating structural inequities in the US healthcare system."

The paper also looks at how algorithm-using industries and institutional actors “take advantage of (often already marginalized) people by unfair or unethical means,” including the “ghost workers” who label training data, a phenomenon which involves populations along what one researcher called “the old fault lines of colonialism.” And the paper provides examples of what it calls “clearly exploitative situations, where organizations use countries outside of their own as testing grounds — specifically because they lack pre-existing safeguards and regulations around data and its use, or because the mode of testing would violate laws in their home countries.”

They cite the example of Cambridge Analytica, which according to Nanjala Nyabola’s “Digital Democracy, Analogue Politics” beta-tested algorithms for influencing voters during elections in Kenya and Nigeria in part because those countries had weak data protection laws.


A greedy algorithm is a simple

The greedy method is an approach for solving certain types of optimization problems: the greedy algorithm chooses the optimum result at each stage. While this works the majority of the time, there are numerous examples where the greedy approach is not the correct one. For example, let's say that you're taking the greedy approach to earning money at a certain point in your life. You graduate high school and have two options:
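The coin-change problem gives a compact way to see the same limitation in code. The sketch below is my own illustration, not taken from the post, and uses hypothetical denominations: greedily taking the largest coin that still fits spends three coins to make the amount 6 from denominations {4, 3, 1}, while the optimal answer is two coins (3 + 3).

#include <iostream>
#include <vector>
using namespace std;

// Greedy coin change: always take the largest coin that still fits.
int greedyCoinChange(const vector<int>& coins, int amount) {
    int count = 0;
    for (int c : coins) {          // coins are listed in descending order
        while (amount >= c) {      // keep taking coin c while it fits
            amount -= c;
            count++;
        }
    }
    return count;                  // coins used by the greedy choice
}

int main() {
    vector<int> coins = {4, 3, 1};             // hypothetical denominations
    cout << "Greedy uses " << greedyCoinChange(coins, 6)
         << " coins for amount 6, but the optimum is 2 (3 + 3)\n";
    return 0;
}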


Importance of Market Research Before You Get a Mobile App Developed

With the growing adoption of technology, the mobile app market has grown by leaps and bounds, especially in the last decade. Estimates suggest that the size of the mobile app market will reach $407.31 billion by 2026. Thus, if you have some trending app ideas, this is the best time to invest in mobile app development.

Since the mobile app market is very dynamic, keeping up with changing trends is important, and in-depth market research before developing a mobile app is vital to doing so. Market research gives a company valuable insights into its competitors, as well as a clear idea of its own strengths and weaknesses.

The performance of a mobile app today depends significantly on the behind-the-scenes research work. Many studies have shown that mobile apps developed without adequate research die a premature death in the app store. Statistically, around 72% of new mobile apps fail to make their mark on the market because of poor research.

This article discusses why market research for your mobile application is needed and the difference it can make to your overall business ROI.

How Can Market Research Help with Successful Mobile App Development?
“In the long run, curiosity-driven research just works better. Real breakthroughs come from people focusing on what they’re excited about.” – Geoffrey Hinton, Psychologist and Computer Scientist

The business of mobile app development is very dynamic and depends largely on the changing needs and wants of customers. Market research is crucial to building a successful mobile app, so businesses need to stay on top of their game when it comes to knowing the latest market trends.

Because of this ever-changing nature, developing a mobile app is a tough nut to crack, and without adequate market research the whole process can become directionless in no time. A business wouldn't know who its target audience is, which market is best for its mobile application, or, most importantly, which features are must-haves and which can be added later.

In addition, thorough market research helps keep the app development project within budget and assists the marketing team in coming up with unique ideas to enhance the mobile application's popularity.

A detailed analysis of the market for your business app will give you valuable insights and stop you from making costly mistakes. Moreover, because you will understand the customers' pain points, you won't cram the mobile app with unnecessary features.

With more options available than ever, users' patience is declining rapidly: they will instantly discard a mobile app if it keeps them waiting to complete important tasks.

In-depth market research also helps you become a pioneer in your industry, using mobile apps as a platform to take your business to greater heights.

Advantages of market research
The benefits that businesses gain from market research when getting a cutting-edge mobile app developed are immense. Some of the top advantages are listed below:

Faster data collection
Market research for mobile apps allows faster data collection, which is one of the main reasons app market research is recommended as a must-do strategy. Since today's customers use their smartphones more than any other device, it is easier to get fast responses through an app.

Better Insights
With in-depth research, the insights are not limited to text-based questions. You can gauge customers' behavior from various touchpoints such as social media, photos, audio, and video, giving you a more diverse data set for further analysis.

Enhances the brand value
The needs and wants of consumers are constantly changing, making it difficult for businesses to identify their core audience or the sections of a market most interested in their mobile application. Market research helps foster the customer-brand relationship and ultimately increases brand value.

Better results
Through solid mobile app market research, you can increase the usability of your mobile app. Research shows that 80% of smartphone users check their phone within 15 minutes of waking up, so the more you engage with your customers through the mobile app, the better your chances of growing your business.

The process to conduct mobile app market research
It is essential for businesses to have a clear idea of how to conduct their market research. There is no single template that defines the way to do it; in the quest to stand out from its peers, each business conducts the research in its own way.

Broadly, market research falls into two major categories:

Primary Research:
In primary research, the company must define the actual need for the app in the market. After this, it must design an optimal business model to keep the app relevant. Once the business model is finalized, the next step is optimizing the marketing strategy for the app. Marketing strategy carries even more weight in the modern market, as customers are increasingly inclined toward personalized services.

Secondary Research:
Secondary research focuses on the core strength of the mobile app. Once you define that core strength, segmenting your target audience becomes easy. In addition, the company can optimize its social media strategy and cater to each of its target customers individually.

Continue reading: Importance-market-research-before-mobile-app-developed


Tia Gottlieb

KMP — Pattern Matching Algorithm

Finding a certain piece of text inside a document is an important feature nowadays. It is widely used in many practical things we do in everyday life, such as searching for something on Google or detecting plagiarism. For small texts, the pattern-matching algorithm doesn't need to be particularly efficient to behave well. However, big jobs like searching for the word 'cake' in a 300-page book can take a lot of time if a naive algorithm is used.

The naive algorithm

Before talking about KMP, we should analyze the inefficient approach to finding a sequence of characters in a text. This algorithm slides the pattern over the text one position at a time and checks for a match at each position. The complexity of this solution is O(m * (n - m + 1)), where m is the length of the pattern and n is the length of the text.

Find all the occurrences of string pat in string txt (naive algorithm).

#include <iostream>
#include <string>
#include <algorithm>
using namespace std;

string pat = "ABA";         // the pattern
string txt = "CABBCABABAB"; // the text in which we are searching

bool checkForPattern(int index, int patLength) {
    int i;
    // checks if characters from pat are different from those in txt
    for (i = 0; i < patLength; i++) {
        if (txt[index + i] != pat[i]) {
            return false;
        }
    }
    return true;
}

void findPattern() {
    int patternLength = pat.size();
    int textLength = txt.size();

    for (int i = 0; i <= textLength - patternLength; i++) {
        // check for every index if there is a match
        if (checkForPattern(i, patternLength)) {
            cout << "Pattern at index " << i << "\n";
        }
    }
}

int main()
{
    findPattern();
    return 0;
}

KMP approach

This algorithm exploits the fact that the pattern may contain sub-patterns that appear more than once, which improves the complexity to linear time. The idea is that when we find a mismatch, we already know some of the characters in the next search window, so we save time by skipping comparisons of characters we already know will match. To know how far to skip, we pre-process an auxiliary array prePos over the pattern. prePos holds integer values that tell us how many characters can be jumped; for each position it stores the length of the longest proper prefix of the pattern that is also a suffix.
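To make this concrete, here is a minimal sketch (my own, not code from the original article) of how the prePos preprocessing and the linear-time search could be written, reusing the same pat and txt as the naive version above; the helper names buildPrePos and findPatternKMP are assumptions rather than names used by the author.

#include <iostream>
#include <string>
#include <vector>
using namespace std;

string pat = "ABA";         // the pattern
string txt = "CABBCABABAB"; // the text in which we are searching

// prePos[i] = length of the longest proper prefix of pat[0..i]
// that is also a suffix of pat[0..i]
vector<int> buildPrePos(const string& p) {
    int m = p.size();
    vector<int> prePos(m, 0);
    int len = 0;                       // length of the previous longest prefix-suffix
    for (int i = 1; i < m; ) {
        if (p[i] == p[len]) {
            prePos[i++] = ++len;
        } else if (len != 0) {
            len = prePos[len - 1];     // fall back without advancing i
        } else {
            prePos[i++] = 0;
        }
    }
    return prePos;
}

void findPatternKMP() {
    int n = txt.size(), m = pat.size();
    vector<int> prePos = buildPrePos(pat);
    int i = 0, j = 0;                  // i indexes txt, j indexes pat
    while (i < n) {
        if (txt[i] == pat[j]) {
            i++; j++;
            if (j == m) {                          // full match found
                cout << "Pattern at index " << i - j << "\n";
                j = prePos[j - 1];                 // keep searching for further matches
            }
        } else if (j != 0) {
            j = prePos[j - 1];                     // skip characters we know already match
        } else {
            i++;
        }
    }
}

int main()
{
    findPatternKMP();
    return 0;
}

For pat = "ABA" and txt = "CABBCABABAB", this prints matches at indices 5 and 7, the same result as the naive search, while examining each text character only a bounded number of times.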


Beth Nabimanya

Algorithm trading backtest and optimization examples

Algorithmic trading backtests

xbtusd-vanila-market-making-backtest-hedge