1601172000
If you have ever been tasked with putting a machine learning model into production, you probably know that the Scikit-Learn library offers one of the best ways — if not the best way — of creating production-quality machine learning workflows. The ecosystem’s Pipeline, ColumnTransformer, preprocessor, imputer, and feature selection classes are powerful tools that transform raw data into model-ready features.
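To make the discussion concrete, here is a minimal sketch of the kind of workflow this refers to. The column names, transformers, and estimator below are illustrative assumptions rather than the exact pipeline used later in the post.
# Minimal sketch of a heterogeneous preprocessing + model Pipeline (illustrative only).
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.ensemble import RandomForestClassifier
numeric_features = ["age", "balance"]          # hypothetical numeric columns
categorical_features = ["job", "education"]    # hypothetical categorical columns
numeric_pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
])
categorical_pipe = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),
    ("encode", OneHotEncoder(handle_unknown="ignore")),
])
preprocessor = ColumnTransformer([
    ("num", numeric_pipe, numeric_features),
    ("cat", categorical_pipe, categorical_features),
])
model = Pipeline([
    ("preprocess", preprocessor),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
# model.fit(X_train, y_train) would yield the kind of fitted Pipeline discussed below.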
However, before anyone is going to let you deploy to production, you are going to want to have some rudimentary understanding of how the new model works. The most common way to explain how a black-box model works is by plotting feature names and importance values. If you have ever tried to extract the feature names from a heterogeneous dataset processed by ColumnTransformer, you know that this is no easy task. Exhaustive Internet searches have only brought to my attention where others have asked the same question or offered a partial answer, instead of yielding a comprehensive and satisfying solution.
To remedy this situation, I have developed a class called FeatureImportance that will extract feature names and importance values from a Pipeline instance. It then uses the Plotly library to plot the feature importance using only a few lines of code. In this post, I will load a fitted Pipeline, demonstrate how to use my class and then give an overview of how it works. The complete code can be found here or at the end of this blog post.
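As a rough sketch of the idea such a class automates, recent scikit-learn versions (1.0+) let a fitted ColumnTransformer report its transformed feature names via get_feature_names_out, which can then be paired with the estimator's feature_importances_ and handed to Plotly. The step names "preprocess" and "clf" and the helper function below are assumptions for illustration, not the actual FeatureImportance implementation.
# Illustrative sketch only; assumes a fitted Pipeline with steps named "preprocess" and "clf".
import pandas as pd
import plotly.express as px
def plot_feature_importance(fitted_pipeline, top_n=20):
    preprocessor = fitted_pipeline.named_steps["preprocess"]   # a fitted ColumnTransformer
    estimator = fitted_pipeline.named_steps["clf"]             # a tree-based model
    feature_names = preprocessor.get_feature_names_out()       # scikit-learn >= 1.0
    importances = estimator.feature_importances_
    df = (pd.DataFrame({"feature": feature_names, "importance": importances})
            .sort_values("importance")
            .tail(top_n))
    return px.bar(df, x="importance", y="feature", orientation="h")
# fig = plot_feature_importance(model)
# fig.show()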
There are two things I should note before continuing:
#programming #data-science #machine-learning
1652748716
Exploratory data analysis is used by data scientists to analyze and investigate data sets and summarize their main characteristics, often employing data visualization methods. It helps determine how best to manipulate data sources to get the answers you need, making it easier for data scientists to discover patterns, spot anomalies, test a hypothesis, or check assumptions. EDA is primarily used to see what data can reveal beyond the formal modeling or hypothesis testing task and provides a better understanding of data set variables and the relationships between them. It can also help determine if the statistical techniques you are considering for data analysis are appropriate or not.
🔹 Topics Covered:
00:00:00 Basics of EDA with Python
01:40:10 Multiple Variate Analysis
02:30:26 Outlier Detection
03:44:48 Cricket World Cup Analysis using Exploratory Data Analysis
If we want to explain EDA in simple terms, it means trying to understand the given data much better, so that we can make some sense out of it.
We can find a more formal definition in Wikipedia.
In statistics, exploratory data analysis is an approach to analyzing data sets to summarize their main characteristics, often with visual methods. A statistical model can be used or not, but primarily EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis testing task.
EDA in Python uses data visualization to draw meaningful patterns and insights. It also involves the preparation of data sets for analysis by removing irregularities in the data.
Based on the results of EDA, companies also make business decisions, which can have repercussions later.
In this article, we'll cover the following topics: data sourcing, data cleaning (fixing rows and columns, handling missing values, outliers, and standardizing values), univariate analysis, bivariate analysis, and multivariate analysis.
Data Sourcing is the process of finding and loading the data into our system. Broadly there are two ways in which we can find data.
Private Data
As the name suggests, private data is provided by private organizations, and there are some security and privacy concerns attached to it. This type of data is mainly used for an organization's internal analysis.
Public Data
This type of data is available to everyone. We can find it on government websites, from public organizations, etc. Anyone can access this data; we do not need any special permissions or approval.
We can get public data from a number of freely available online sources.
The very first step of EDA is data sourcing, and we have now seen how to access data and load it into our system. The next step is cleaning the data.
After completing the Data Sourcing, the next step in the process of EDA is Data Cleaning. It is very important to get rid of the irregularities and clean the data after sourcing it into our system.
Irregularities in the data come in different forms: misplaced rows and columns, missing values, outliers, and values that are not on a common scale.
To perform the data cleaning we are using a sample data set, which can be found here.
We are using Jupyter Notebook for analysis.
First, let’s import the necessary libraries and store the data in our system for analysis.
#import the useful libraries.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# Read the data set of "Marketing Analysis" in data.
data = pd.read_csv("marketing_analysis.csv")
# Printing the data
data
Now, the data set looks like this,
If we observe the above dataset, there are some discrepancies in the column headers for the first two rows; the correct data only starts from index number 1. So, we have to fix the first two rows.
This is called Fixing the Rows and Columns. Let’s ignore the first two rows and load the data again.
#import the useful libraries.
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# Read the file in data without first two rows as it is of no use.
data = pd.read_csv("marketing_analysis.csv",skiprows = 2)
#print the head of the data frame.
data.head()
Now, the dataset looks like this, and it makes more sense.
Dataset after fixing the rows and columns
Following are the steps to be taken while Fixing Rows and Columns:
Now if we observe the above dataset, the customerid column has no importance to our analysis, and the jobedu column contains both the job and the education information.
So we'll drop the customerid column, split the jobedu column into two new columns, job and education, and then drop the jobedu column as well.
# Drop the customer id as it is of no use.
data.drop('customerid', axis = 1, inplace = True)
#Extract job & Education in newly from "jobedu" column.
data['job']= data["jobedu"].apply(lambda x: x.split(",")[0])
data['education']= data["jobedu"].apply(lambda x: x.split(",")[1])
# Drop the "jobedu" column from the dataframe.
data.drop('jobedu', axis = 1, inplace = True)
# Printing the Dataset
data
Now, the dataset looks like this,
Dropping the customerid and jobedu columns and adding the job and education columns
Missing Values
If there are missing values in the Dataset before doing any statistical analysis, we need to handle those missing values.
There are mainly three types of missing values: MCAR (missing completely at random), MAR (missing at random), and MNAR (missing not at random).
Let’s see which columns have missing values in the dataset.
# Checking the missing values
data.isnull().sum()
The output will be,
As we can see three columns contain missing values. Let’s see how to handle the missing values. We can handle missing values by dropping the missing records or by imputing the values.
Drop the missing Values
Let’s handle the missing values in the age column.
# Dropping the records with age missing in data dataframe.
data = data[~data.age.isnull()].copy()
# Checking the missing values in the dataset.
data.isnull().sum()
Let’s check the missing values in the dataset now.
Let’s impute values to the missing values for the month column.
Since the month column is of an object type, let’s calculate the mode of that column and impute those values to the missing values.
# Find the mode of month in data
month_mode = data.month.mode()[0]
# Fill the missing values with mode value of month in data.
data.month.fillna(month_mode, inplace = True)
# Let's see the null values in the month column.
data.month.isnull().sum()
Now output is,
# Mode of month is
'may, 2017'
# Null values in month column after imputing with mode
0
Now let's handle the missing values in the response column. Since response is our target column, imputing values into it would affect our analysis, so it is better to drop the records where response is missing.
#drop the records with response missing in data.
data = data[~data.response.isnull()].copy()
# Calculate the missing values in each column of data frame
data.isnull().sum()
Let’s check whether the missing values in the dataset have been handled or not,
All the missing values have been handled
We can also leave the missing values as NaN, since most statistical functions in pandas skip NaN by default, so they won't affect the outcome.
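The reason this is usually safe is that pandas skips NaN by default when computing summary statistics, so a handful of NaNs will not distort the aggregates. A quick illustration with made-up numbers:
# pandas ignores NaN in aggregations unless told otherwise (illustrative values).
import numpy as np
import pandas as pd
s = pd.Series([10, 20, np.nan, 40])
print(s.mean())              # 23.33..., computed over the three non-missing values
print(s.mean(skipna=False))  # nan, NaN only propagates if we explicitly ask for it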
Handling Outliers
We have seen how to fix missing values, now let’s see how to handle outliers in the dataset.
Outliers are the values that are far beyond the next nearest data points.
There are two types of outliers: univariate outliers, which show up when looking at the distribution of a single variable, and multivariate outliers, which show up only when two or more variables are considered together.
So, after understanding the causes of these outliers, we can handle them by dropping those records, capping or imputing the values, or leaving them as is, if that makes more sense.
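As a concrete sketch of the drop-or-cap options, a common rule of thumb is the 1.5 * IQR fence. The snippet below applies it to the balance column; treating everything outside the fences as an outlier is an assumption that should be sanity-checked against the business context.
# IQR-based outlier handling for the balance column (rule-of-thumb fences).
q1, q3 = data.balance.quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
# Option 1: drop the outlying records.
data_no_outliers = data[(data.balance >= lower) & (data.balance <= upper)]
# Option 2: cap (winsorize) the values instead of dropping the rows.
data['balance_capped'] = data.balance.clip(lower=lower, upper=upper)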
Standardizing Values
To perform data analysis on a set of values, we have to make sure the values in a column are all on the same scale. For example, if the data contains the top speeds of different companies' cars, the whole column should use a single unit, either the meters/sec scale or the miles/sec scale.
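A small sketch of what this might look like in practice, using a hypothetical cars table with a speed column recorded in mixed units (these columns are not part of the marketing dataset):
# Hypothetical example: bring a speed column recorded in mixed units onto one scale.
import pandas as pd
cars = pd.DataFrame({
    "model": ["A", "B", "C"],
    "top_speed": [220.0, 61.1, 150.0],
    "speed_unit": ["km/h", "m/s", "mph"],
})
to_kmh = {"km/h": 1.0, "m/s": 3.6, "mph": 1.609344}   # conversion factors to km/h
cars["top_speed_kmh"] = cars.top_speed * cars.speed_unit.map(to_kmh)
print(cars)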
Now, that we are clear on how to source and clean the data, let’s see how we can analyze the data.
If we analyze data over a single variable/column from a dataset, it is known as Univariate Analysis.
Categorical Unordered Univariate Analysis:
An unordered variable is a categorical variable that has no defined order. If we take our data as an example, the job column in the dataset is divided into many sub-categories like technician, blue-collar, services, management, etc. There is no weight or measure given to any value in the ‘job’ column.
Now, let’s analyze the job category by using plots. Since Job is a category, we will plot the bar plot.
# Let's calculate the percentage of each job status category.
data.job.value_counts(normalize=True)
#plot the bar graph of percentage job categories
data.job.value_counts(normalize=True).plot.barh()
plt.show()
The output looks like this,
From the above bar plot, we can infer that the data set contains more blue-collar workers than any other category.
Categorical Ordered Univariate Analysis:
Ordered variables are those variables that have a natural rank or order. Some examples of categorical ordered variables from our dataset are education (primary, secondary, tertiary) and month (Jan, Feb, and so on).
Now, let’s analyze the education variable from the dataset. Since we’ve already seen a bar plot, let’s see what a pie chart looks like.
#calculate the percentage of each education category.
data.education.value_counts(normalize=True)
#plot the pie chart of education categories
data.education.value_counts(normalize=True).plot.pie()
plt.show()
The output will be,
From the above analysis, we can infer that most people in the data set have a secondary education, followed by tertiary and then primary. Also, a very small percentage of the values are unknown.
This is how we analyze univariate categorical variables. If the column or variable is numerical, we analyze it by calculating its mean, median, standard deviation, etc. We can get those values by using the describe function.
data.salary.describe()
The output will be,
If we analyze data by taking two variables/columns into consideration from a dataset, it is known as Bivariate Analysis.
a) Numeric-Numeric Analysis:
Analyzing the two numeric variables from a dataset is known as numeric-numeric analysis. We can analyze it in three different ways.
Scatter Plot
Let’s take three columns, balance, age, and salary, from our dataset and see what we can infer by plotting scatter plots of salary vs. balance and age vs. balance.
#plot the scatter plot of balance and salary variable in data
plt.scatter(data.salary,data.balance)
plt.show()
#plot the scatter plot of balance and age variable in data
data.plot.scatter(x="age",y="balance")
plt.show()
Now, the scatter plots look like this,
Pair Plot
Now, let’s plot Pair Plots for the three columns we used in plotting Scatter plots. We’ll use the seaborn library for plotting Pair Plots.
#plot the pair plot of salary, balance and age in data dataframe.
sns.pairplot(data = data, vars=['salary','balance','age'])
plt.show()
The Pair Plot looks like this,
Correlation Matrix
Since we cannot use more than two variables as x-axis and y-axis in Scatter and Pair Plots, it is difficult to see the relation between three numerical variables in a single graph. In those cases, we’ll use the correlation matrix.
# Creating a matrix using age, salary, balance as rows and columns
data[['age','salary','balance']].corr()
#plot the correlation matrix of salary, balance and age in data dataframe.
sns.heatmap(data[['age','salary','balance']].corr(), annot=True, cmap = 'Reds')
plt.show()
First, we created a matrix using age, salary, and balance. After that, we are plotting the heatmap using the seaborn library of the matrix.
b) Numeric - Categorical Analysis
Analyzing the one numeric variable and one categorical variable from a dataset is known as numeric-categorical analysis. We analyze them mainly using mean, median, and box plots.
Let’s take the salary and response columns from our dataset.
First, check the mean value using groupby:
#groupby the response to find the mean of the salary with response no & yes separately.
data.groupby('response')['salary'].mean()
The output will be,
There is not much of a difference in mean salary between the yes and no responses.
Let’s calculate the median,
#groupby the response to find the median of the salary with response no & yes separately.
data.groupby('response')['salary'].median()
The output will be,
Judging by both the mean and the median, the response seems to be the same regardless of the person's salary. But is it truly behaving like that? Let's plot a box plot and check the behavior.
#plot the box plot of salary for yes & no responses.
sns.boxplot(x=data.response, y=data.salary)
plt.show()
The box plot looks like this,
As we can see, when we plot the Box Plot, it paints a very different picture compared to mean and median. The IQR for customers who gave a positive response is on the higher salary side.
This is how we analyze numeric-categorical variables: we use the mean, median, and box plots to draw conclusions.
c) Categorical — Categorical Analysis
Since our target variable/column is the response column, we'll see how the different categories, like education, marital status, etc., are associated with it. Instead of 'yes' and 'no' we will convert them into 1 and 0; by doing that, we'll get the response rate.
#create response_rate of numerical data type where response "yes"= 1, "no"= 0
data['response_rate'] = np.where(data.response=='yes',1,0)
data.response_rate.value_counts()
The output looks like this,
Let’s see how the response rate varies for different categories in marital status.
#plot the bar graph of marital status with average value of response_rate
data.groupby('marital')['response_rate'].mean().plot.bar()
plt.show()
The graph looks like this,
From the above graph, we can infer that the positive response rate is higher for single members of the data set. Similarly, we can plot the graphs for loan vs. response rate, housing loan vs. response rate, etc.
If we analyze data by taking more than two variables/columns into consideration from a dataset, it is known as Multivariate Analysis.
Let’s see how ‘Education’, ‘Marital’, and ‘Response_rate’ vary with each other.
First, we’ll create a pivot table with the three columns and after that, we’ll create a heatmap.
result = pd.pivot_table(data=data, index='education', columns='marital',values='response_rate')
print(result)
#create heat map of education vs marital vs response_rate
sns.heatmap(result, annot=True, cmap = 'RdYlGn', center=0.117)
plt.show()
The pivot table and heatmap look like this,
Based on the heatmap, we can infer that married people with primary education are less likely to respond positively to the survey, while single people with tertiary education are most likely to respond positively.
Similarly, we can plot the graphs for Job vs marital vs response, Education vs poutcome vs response, etc.
Conclusion
This is how we do Exploratory Data Analysis. EDA helps us look beyond the data; the more we explore the data, the more insights we draw from it. As a data analyst, almost 80% of our time is spent understanding data and solving various business problems through EDA.
Thank you for reading and Happy Coding!!!
#dataanalysis #python
1660642800
Implement high quality audio filters with just a few lines of code and Novocaine, or your own audio library of choice.
NVDSP comes with a wide variety of audio filters, including high-pass, low-pass, band-pass, notch, and peaking EQ filters.
To start out, I recommend getting a fresh copy of Novocaine and opening Novocaine's excellent example project. Then import NVDSP and the Filters folder and start your filtering journey.
// ... import Novocaine here ...
#import "NVDSP/NVDSP.h"
#import "NVDSP/Filters/NVHighpassFilter.h"
// init Novocaine audioManager
audioManager = [Novocaine audioManager];
float samplingRate = audioManager.samplingRate;
// init fileReader which we will later fetch audio from
NSURL *inputFileURL = [[NSBundle mainBundle] URLForResource:@"Trentemoller-Miss-You" withExtension:@"mp3"];
fileReader = [[AudioFileReader alloc]
initWithAudioFileURL:inputFileURL
samplingRate:audioManager.samplingRate
numChannels:audioManager.numOutputChannels];
// setup Highpass filter
NVHighpassFilter *HPF;
HPF = [[NVHighpassFilter alloc] initWithSamplingRate:samplingRate];
HPF.cornerFrequency = 2000.0f;
HPF.Q = 0.5f;
// setup audio output block
[fileReader play];
[audioManager setOutputBlock:^(float *outData, UInt32 numFrames, UInt32 numChannels) {
[fileReader retrieveFreshAudio:outData numFrames:numFrames numChannels:numChannels];
[HPF filterData:outData numFrames:numFrames numChannels:numChannels];
}];
Note that NVDSP works with raw audio buffers, so it can also work with other libraries instead of Novocaine.
// import Novocaine.h and NVDSP.h
#import "NVDSP/Filter/NVPeakingEQFilter.h"
NVPeakingEQFilter *PEF = [[NVPeakingEQFilter alloc] initWithSamplingRate:audioManager.samplingRate];
PEF.centerFrequency = 1000.0f;
PEF.Q = 3.0f;
PEF.G = 20.0f;
[PEF filterData:data numFrames:numFrames numChannels:numChannels];
// import Novocaine.h and NVDSP.h
#import "NVDSP/Filter/NVLowpassFilter.h"
NVLowpassFilter *LPF = [[NVLowpassFilter alloc] initWithSamplingRate:audioManager.samplingRate];
LPF.cornerFrequency = 800.0f;
LPF.Q = 0.8f;
[LPF filterData:data numFrames:numFrames numChannels:numChannels];
// import Novocaine.h and NVDSP.h
#import "NVDSP/Filter/NVNotchFilter.h"
NVNotchFilter *NF = [[NVNotchFilter alloc] initWithSamplingRate:audioManager.samplingRate];
NF.centerFrequency = 3000.0f;
NF.Q = 0.8f;
[NF filterData:data numFrames:numFrames numChannels:numChannels];
There are two types of bandpass filters:
* 0 dB gain bandpass filter (NVBandpassFilter.h)
* Peak gain Q bandpass filter (NVBandpassQPeakGainFilter.h)
// import Novocaine.h and NVDSP.h
#import "NVDSP/Filter/NVBandpassFilter.h"
NVBandpassFilter *BPF = [[NVBandpassFilter alloc] initWithSamplingRate:audioManager.samplingRate];
BPF.centerFrequency = 2500.0f;
BPF.Q = 0.9f;
[BPF filterData:data numFrames:numFrames numChannels:numChannels];
// import Novocaine.h and NVDSP.h
#import "NVDSP/Utilities/NVSoundLevelMeter.h"
NVSoundLevelMeter *SLM = [[NVSoundLevelMeter alloc] init];
float dB = [SLM getdBLevel:outData numFrames:numFrames numChannels:numChannels];
NSLog(@"dB level: %f", dB);
// NSLogging in an output loop will most likely cause hiccups/clicky noises, but it does log the dB level!
// To get a proper dB value, you have to call the getdBLevel method a few times (it has memory of previous values)
// You call this inside the input or outputBlock: [audioManager setOutputBlock:^...
All sample values (typically -1.0f .. 1.0f when not clipping) are multiplied by the gain value.
// import Novocaine.h and NVDSP.h
NVDSP *generalDSP = [[NVDSP alloc] init];
[generalDSP applyGain:outData length:numFrames*numChannels gain:0.8];
This converts a left and right buffer into a mono signal. It takes the average of the samples.
// Deinterleave stereo buffer into separate left and right
float *left = (float *)malloc((numFrames + 2) * sizeof(float));
float *right = (float *)malloc((numFrames + 2) * sizeof(float));
[generalDSP deinterleave:data left:left right:right length:numFrames];
// Convert left and right to a mono 2 channel buffer
[generalDSP mono:data left:left right:right length:numFrames];
// Free buffers
free(left);
free(right);
Multiple peaking EQs with high gains can cause clipping. Clipping is basically sample data that exceeds the maximum or minimum value of 1.0f or -1.0f respectively. Clipping will cause really loud and dirty noises, like a bad overdrive effect. You can use the counterClipping method to prevent clipping (it will reduce the sound level).
// import Novocaine.h and NVDSP.h
#import "NVDSP/Utilities/NVClippingDetection.h"
NVClippingDetection *CDT = [[NVClippingDetection alloc] init];
// ... possible clipped outData ...//
[CDT counterClipping:outData numFrames:numFrames numChannels:numChannels];
// ... outData is now safe ...//
// or get the amount of clipped samples:
- (float) getClippedSamples:(float *)data numFrames:(UInt32)numFrames numChannels:(UInt32)numChannels;
// or get the percentage of clipped samples:
- (float) getClippedPercentage:(float*)data numFrames:(UInt32)numFrames numChannels:(UInt32)numChannels;
// or get the maximum value of a clipped sample that was found
- (float) getClippingSample:(float *)data numFrames:(UInt32)numFrames numChannels:(UInt32)numChannels;
See /Examples/NVDSPExample for a simple iOS XcodeProject example. Please note the Novocaine bundled with it might be outdated.
The NVDSP class is written in C++, so the classes that use it will have to be Objective-C++. Change all the files that use NVDSP from MyClass.m to MyClass.mm.
Author: bartolsthoorn
Source Code: https://github.com/bartolsthoorn/NVDSP
License: MIT license
1649042880
React native bridge for AppAuth - an SDK for communicating with OAuth2 providers
This version supports react-native@0.63+. The last pre-0.63 compatible version is v5.1.3.
React Native bridge for AppAuth-iOS and AppAuth-Android SDKS for communicating with OAuth 2.0 and OpenID Connect providers.
This library should support any OAuth provider that implements the OAuth2 spec.
We only support the Authorization Code Flow.
AppAuth is a mature OAuth client implementation that follows the best practices set out in RFC 8252 - OAuth 2.0 for Native Apps, including using SFAuthenticationSession and SFSafariViewController on iOS, and Custom Tabs on Android. WebViews are explicitly not supported due to the security and usability reasons explained in Section 8.12 of RFC 8252.
AppAuth also supports the PKCE ("Pixy") extension to OAuth which was created to secure authorization codes in public clients when custom URI scheme redirects are used.
To learn more, read this short introduction to OAuth and PKCE on the Formidable blog.
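For intuition only, here is a short Python sketch of how a PKCE code_verifier and its S256 code_challenge are derived per RFC 7636. This is purely illustrative; react-native-app-auth and the underlying AppAuth SDKs generate and verify these values for you, so you never implement this yourself.
# Illustrative only: deriving a PKCE code_verifier / code_challenge pair (RFC 7636, S256).
import base64
import hashlib
import secrets
def make_pkce_pair():
    # 32 random bytes -> 43-character base64url string, within RFC 7636's 43-128 length bounds.
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    # The challenge is sent with the authorization request; the verifier with the token request.
    return code_verifier, code_challenge
verifier, challenge = make_pkce_pair()
print(verifier, challenge)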
See Usage for example configurations, and the included Example application for a working sample.
authorize
This is the main function to use for authentication. Invoking this function will do the whole login flow and returns the access token, refresh token and access token expiry date when successful, or it throws an error when not successful.
import { authorize } from 'react-native-app-auth';
const config = {
issuer: '<YOUR_ISSUER_URL>',
clientId: '<YOUR_CLIENT_ID>',
redirectUrl: '<YOUR_REDIRECT_URL>',
scopes: ['<YOUR_SCOPES_ARRAY>'],
};
const result = await authorize(config);
prefetchConfiguration
ANDROID This will prefetch the authorization service configuration. Invoking this function is optional and will speed up calls to authorize. This is only supported on Android.
import { prefetchConfiguration } from 'react-native-app-auth';
const config = {
warmAndPrefetchChrome: true,
issuer: '<YOUR_ISSUER_URL>',
clientId: '<YOUR_CLIENT_ID>',
redirectUrl: '<YOUR_REDIRECT_URL>',
scopes: ['<YOUR_SCOPES_ARRAY>'],
};
prefetchConfiguration(config);
This is your configuration object for the client. The config is passed into each of the methods with optional overrides.
* issuer - (string) base URI of the authentication server. If no serviceConfiguration (below) is provided, issuer is a mandatory field, so that the configuration can be fetched from the issuer's OIDC discovery endpoint.
* serviceConfiguration - (object) you may manually configure token exchange endpoints in cases where the issuer does not support the OIDC discovery protocol, or simply to avoid an additional round trip to fetch the configuration. If no issuer (above) is provided, the service configuration is mandatory.
  * authorizationEndpoint - (string) REQUIRED fully formed url to the OAuth authorization endpoint
  * tokenEndpoint - (string) REQUIRED fully formed url to the OAuth token exchange endpoint
  * revocationEndpoint - (string) fully formed url to the OAuth token revocation endpoint. If you want to be able to revoke a token and no issuer is specified, this field is mandatory.
  * registrationEndpoint - (string) fully formed url to your OAuth/OpenID Connect registration endpoint. Only necessary for servers that require client registration.
  * endSessionEndpoint - (string) fully formed url to your OpenID Connect end session endpoint. If you want to be able to end a user's session and no issuer is specified, this field is mandatory.
* clientId - (string) REQUIRED your client id on the auth server
* clientSecret - (string) client secret to pass to token exchange requests. :warning: Read more about client secrets
* redirectUrl - (string) REQUIRED the url that links back to your app with the auth code
* scopes - (array<string>) the scopes for your token, e.g. ['email', 'offline_access'].
* additionalParameters - (object) additional parameters that will be passed in the authorization request. Must be string values! E.g. setting additionalParameters: { hello: 'world', foo: 'bar' } would add hello=world&foo=bar to the authorization request.
* clientAuthMethod - (string) ANDROID Client Authentication Method. Can be either basic (default) for Basic Authentication or post for HTTP POST body Authentication
* dangerouslyAllowInsecureHttpRequests - (boolean) ANDROID whether to allow requests over plain HTTP or with self-signed SSL certificates. :warning: Can be useful for testing against local server, should not be used in production. This setting has no effect on iOS; to enable insecure HTTP requests, add a NSExceptionAllowsInsecureHTTPLoads exception to your App Transport Security settings.
* customHeaders - (object) ANDROID you can specify custom headers to pass during authorize request and/or token request.
  * authorize - ({ [key: string]: value }) headers to be passed during authorization request.
  * token - ({ [key: string]: value }) headers to be passed during token retrieval request.
  * register - ({ [key: string]: value }) headers to be passed during registration request.
* additionalHeaders - ({ [key: string]: value }) IOS you can specify additional headers to be passed for all authorize, refresh, and register requests.
* useNonce - (boolean) (default: true) optionally allows not sending the nonce parameter, to support non-compliant providers
* usePKCE - (boolean) (default: true) optionally allows not sending the code_challenge parameter and skipping PKCE code verification, to support non-compliant providers.
* skipCodeExchange - (boolean) (default: false) just return the authorization response, instead of automatically exchanging the authorization code. This is useful if this exchange needs to be done manually (not client-side)
* connectionTimeoutSeconds - (number) configure the request timeout interval in seconds. This must be a positive number. The default values are 60 seconds on iOS and 15 seconds on Android.
This is the result from the auth server:
* accessToken - (string) the access token
* accessTokenExpirationDate - (string) the token expiration date
* authorizeAdditionalParameters - (Object) additional url parameters from the authorizationEndpoint response.
* tokenAdditionalParameters - (Object) additional url parameters from the tokenEndpoint response.
* idToken - (string) the id token
* refreshToken - (string) the refresh token
* tokenType - (string) the token type, e.g. Bearer
* scopes - ([string]) the scopes the user has agreed to be granted
* authorizationCode - (string) the authorization code (only if skipCodeExchange=true)
* codeVerifier - (string) the codeVerifier value used for the PKCE exchange (only if both skipCodeExchange=true and usePKCE=true)
refresh
This method will refresh the accessToken using the refreshToken. Some auth providers will also give you a new refreshToken.
import { refresh } from 'react-native-app-auth';
const config = {
issuer: '<YOUR_ISSUER_URL>',
clientId: '<YOUR_CLIENT_ID>',
redirectUrl: '<YOUR_REDIRECT_URL>',
scopes: ['<YOUR_SCOPES_ARRAY>'],
};
const result = await refresh(config, {
refreshToken: `<REFRESH_TOKEN>`,
});
revoke
This method will revoke a token. The tokenToRevoke can be either an accessToken or a refreshToken.
import { revoke } from 'react-native-app-auth';
const config = {
issuer: '<YOUR_ISSUER_URL>',
clientId: '<YOUR_CLIENT_ID>',
redirectUrl: '<YOUR_REDIRECT_URL>',
scopes: ['<YOUR_SCOPES_ARRAY>'],
};
const result = await revoke(config, {
tokenToRevoke: `<TOKEN_TO_REVOKE>`,
includeBasicAuth: true,
sendClientId: true,
});
logout
This method will logout a user, as per the OpenID Connect RP Initiated Logout specification. It requires an idToken, obtained after successfully authenticating with OpenID Connect, and a URL to redirect back after the logout has been performed.
import { logout } from 'react-native-app-auth';
const config = {
issuer: '<YOUR_ISSUER_URL>',
};
const result = await logout(config, {
idToken: '<ID_TOKEN>',
postLogoutRedirectUrl: '<POST_LOGOUT_URL>',
});
register
This will perform dynamic client registration on the given provider. If the provider supports dynamic client registration, it will generate a clientId for you to use in subsequent calls to this library.
import { register } from 'react-native-app-auth';
const registerConfig = {
issuer: '<YOUR_ISSUER_URL>',
redirectUrls: ['<YOUR_REDIRECT_URL>', '<YOUR_OTHER_REDIRECT_URL>'],
};
const registerResult = await register(registerConfig);
* issuer - (string) same as in authorization config
* serviceConfiguration - (object) same as in authorization config
* redirectUrls - (array<string>) REQUIRED specifies all of the redirect urls that your client will use for authentication
* responseTypes - (array<string>) an array that specifies which OAuth 2.0 response types your client will use. The default value is ['code']
* grantTypes - (array<string>) an array that specifies which OAuth 2.0 grant types your client will use. The default value is ['authorization_code']
* subjectType - (string) requests a specific subject type for your client
* tokenEndpointAuthMethod - (string) specifies which clientAuthMethod your client will use for authentication. The default value is 'client_secret_basic'
* additionalParameters - (object) additional parameters that will be passed in the registration request. Must be string values! E.g. setting additionalParameters: { hello: 'world', foo: 'bar' } would add hello=world&foo=bar to the authorization request.
* dangerouslyAllowInsecureHttpRequests - (boolean) ANDROID same as in authorization config
* customHeaders - (object) ANDROID same as in authorization config
* connectionTimeoutSeconds - (number) configure the request timeout interval in seconds. This must be a positive number. The default values are 60 seconds on iOS and 15 seconds on Android.
This is the result from the auth server:
* clientId - (string) the assigned client id
* clientIdIssuedAt - (string) OPTIONAL date string of when the client id was issued
* clientSecret - (string) OPTIONAL the assigned client secret
* clientSecretExpiresAt - (string) date string of when the client secret expires, which will be provided if clientSecret is provided. If new Date(clientSecretExpiresAt).getTime() === 0, then the secret never expires
* registrationClientUri - (string) OPTIONAL uri that can be used to perform subsequent operations on the registration
* registrationAccessToken - (string) token that can be used at the endpoint given by registrationClientUri to perform subsequent operations on the registration. Will be provided if registrationClientUri is provided
npm install react-native-app-auth --save
To setup the iOS project, you need to perform three steps:
Install native dependencies
This library depends on the native AppAuth-ios project. To keep the React Native library agnostic of your dependency management method, the native libraries are not distributed as part of the bridge.
AppAuth supports three options for dependency management.
1. CocoaPods
cd ios
pod install
2. Carthage
With Carthage, add the following line to your Cartfile:
github "openid/AppAuth-iOS" "master"
Then run carthage update --platform iOS.
Drag and drop AppAuth.framework from ios/Carthage/Build/iOS under Frameworks in Xcode.
Add a copy files build step for AppAuth.framework: open Build Phases on Xcode, add a new "Copy Files" phase, choose "Frameworks" as destination, add AppAuth.framework and ensure "Code Sign on Copy" is checked.
3. Static Library
You can also use AppAuth-iOS as a static library. This requires linking the library and your project and including the headers. Suggested configuration:
* Add AppAuth.xcodeproj to your Workspace.
* Add AppAuth-iOS/Source to the search paths of your target ("Build Settings" -> "Header Search Paths").
Register redirect URL scheme
If you intend to support iOS 10 and older, you need to define the supported redirect URL schemes in your Info.plist as follows:
<key>CFBundleURLTypes</key>
<array>
<dict>
<key>CFBundleURLName</key>
<string>com.your.app.identifier</string>
<key>CFBundleURLSchemes</key>
<array>
<string>io.identityserver.demo</string>
</array>
</dict>
</array>
* CFBundleURLName is any globally unique string. A common practice is to use your app identifier.
* CFBundleURLSchemes is an array of URL schemes your app needs to handle. The scheme is the beginning of your OAuth redirect URL, up to the scheme separator (:) character. E.g. if your redirect uri is com.myapp://oauth, then the url scheme is com.myapp.
Define openURL callback in AppDelegate
You need to retain the auth session, in order to continue the authorization flow from the redirect. Follow these steps:
RNAppAuth will call on the given app's delegate via [UIApplication sharedApplication].delegate. Furthermore, RNAppAuth expects the delegate instance to conform to the protocol RNAppAuthAuthorizationFlowManager. Make AppDelegate conform to RNAppAuthAuthorizationFlowManager with the following changes to AppDelegate.h:
+ #import "RNAppAuthAuthorizationFlowManager.h"
- @interface AppDelegate : UIResponder <UIApplicationDelegate, RCTBridgeDelegate>
+ @interface AppDelegate : UIResponder <UIApplicationDelegate, RCTBridgeDelegate, RNAppAuthAuthorizationFlowManager>
+ @property(nonatomic, weak)id<RNAppAuthAuthorizationFlowManagerDelegate>authorizationFlowManagerDelegate;
Add the following code to AppDelegate.m
(to support iOS <= 10 and React Navigation deep linking)
+ - (BOOL)application:(UIApplication *)app openURL:(NSURL *)url options:(NSDictionary<NSString *, id> *) options {
+ if ([self.authorizationFlowManagerDelegate resumeExternalUserAgentFlowWithURL:url]) {
+ return YES;
+ }
+ return [RCTLinkingManager application:app openURL:url options:options];
+ }
If you want to support universal links, add the following to AppDelegate.m under continueUserActivity:
+ if ([userActivity.activityType isEqualToString:NSUserActivityTypeBrowsingWeb]) {
+ if (self.authorizationFlowManagerDelegate) {
+ BOOL resumableAuth = [self.authorizationFlowManagerDelegate resumeExternalUserAgentFlowWithURL:userActivity.webpageURL];
+ if (resumableAuth) {
+ return YES;
+ }
+ }
+ }
The approach mentioned should work with Swift. In this case one should make AppDelegate conform to RNAppAuthAuthorizationFlowManager. Note that this is not tested/guaranteed by the maintainers.
Steps:
1. swift-Bridging-Header.h should include a reference to #import "RNAppAuthAuthorizationFlowManager.h", like so:
#import <React/RCTBundleURLProvider.h>
#import <React/RCTRootView.h>
#import <React/RCTBridgeDelegate.h>
#import <React/RCTBridge.h>
#import "RNAppAuthAuthorizationFlowManager.h" // <-- Add this header
#if DEBUG
#import <FlipperKit/FlipperClient.h>
// etc...
2. AppDelegate.swift should implement the RNAppAuthAuthorizationFlowManager protocol and have a handler for url deep linking. The result should look something like this:
@UIApplicationMain
class AppDelegate: UIApplicationDelegate, RNAppAuthAuthorizationFlowManager { //<-- note the additional RNAppAuthAuthorizationFlowManager protocol
public weak var authorizationFlowManagerDelegate: RNAppAuthAuthorizationFlowManagerDelegate? // <-- this property is required by the protocol
//"open url" delegate function for managing deep linking needs to call the resumeExternalUserAgentFlowWithURL method
func application(
_ app: UIApplication,
open url: URL,
options: [UIApplicationOpenURLOptionsKey: Any] = [:]) -> Bool {
return authorizationFlowManagerDelegate?.resumeExternalUserAgentFlowWithURL(with: url) ?? false
}
}
Note: for RN >= 0.57, you will get a warning about compile being obsolete. To get rid of this warning, use patch-package to replace compile with implementation as in this PR - we're not deploying this right now, because it would break the build for RN < 57.
To setup the Android project, you need to add redirect scheme manifest placeholder:
To capture the authorization redirect, add the following property to the defaultConfig in android/app/build.gradle
:
android {
defaultConfig {
manifestPlaceholders = [
appAuthRedirectScheme: 'io.identityserver.demo'
]
}
}
The scheme is the beginning of your OAuth redirect URL, up to the scheme separator (:) character. E.g. if your redirect uri is com.myapp://oauth, then the url scheme is com.myapp. The scheme must be in lowercase.
NOTE: When integrating with React Navigation deep linking, be sure to make this scheme (and the scheme in the config's redirectUrl) unique from the scheme defined in the deep linking intent-filter. E.g. if the scheme in your intent-filter is set to com.myapp, then update the above scheme/redirectUrl to be com.myapp.auth as seen here.
import { authorize } from 'react-native-app-auth';
// base config
const config = {
issuer: '<YOUR_ISSUER_URL>',
clientId: '<YOUR_CLIENT_ID>',
redirectUrl: '<YOUR_REDIRECT_URL>',
scopes: ['<YOUR_SCOPE_ARRAY>'],
};
// use the client to make the auth request and receive the authState
try {
const result = await authorize(config);
// result includes accessToken, accessTokenExpirationDate and refreshToken
} catch (error) {
console.log(error);
}
Values are in the code field of the rejected Error object.
* service_configuration_fetch_error - could not fetch the service configuration
* authentication_failed - user authentication failed
* token_refresh_failed - could not exchange the refresh token for a new JWT
* registration_failed - could not register
* browser_not_found (Android only) - no suitable browser installed
Some authentication providers, including examples cited below, require you to provide a client secret. The authors of the AppAuth library
strongly recommend you avoid using static client secrets in your native applications whenever possible. Client secrets derived via a dynamic client registration are safe to use, but static client secrets can be easily extracted from your apps and allow others to impersonate your app and steal user data. If client secrets must be used by the OAuth2 provider you are integrating with, we strongly recommend performing the code exchange step on your backend, where the client secret can be kept hidden.
Having said this, in some cases using client secrets is unavoidable. In these cases, a clientSecret parameter can be provided to authorize/refresh calls when performing a token request.
Recommendations on secure token storage can be found here.
Active: Formidable is actively working on this project, and we expect to continue work for the foreseeable future. Bug reports, feature requests and pull requests are welcome.
These providers are OpenID compliant, which means you can use autodiscovery.
These providers implement the OAuth2 spec, but are not OpenID providers, which means you must configure the authorization and token endpoints yourself.
Download Details:
Author: FormidableLabs
Source Code: https://github.com/FormidableLabs/react-native-app-auth
License: MIT License
1561523460
This Matplotlib cheat sheet introduces you to the basics that you need to plot your data with Python and includes code samples.
Data visualization and storytelling with your data are essential skills that every data scientist needs to communicate insights gained from analyses effectively to any audience out there.
For most beginners, the first package that they use to get in touch with data visualization and storytelling is, naturally, Matplotlib: it is a Python 2D plotting library that enables users to make publication-quality figures. But, what might be even more convincing is the fact that other packages, such as Pandas, intend to build more plotting integration with Matplotlib as time goes on.
However, what might slow down beginners is the fact that this package is pretty extensive. There is so much that you can do with it and it might be hard to still keep a structure when you're learning how to work with Matplotlib.
DataCamp has created a Matplotlib cheat sheet for those who might already know how to use the package to make beautiful plots in Python, but who still want to keep a one-page reference handy. Of course, for those who don't know how to work with Matplotlib yet, this might be the extra push needed to get convinced and finally get started with data visualization in Python.
You'll see that this cheat sheet presents you with the six basic steps that you can go through to make beautiful plots.
Check out the infographic by clicking on the button below:
With this handy reference, you'll familiarize yourself in no time with the basics of Matplotlib: you'll learn how you can prepare your data, create a new plot, use some basic plotting routines to your advantage, add customizations to your plots, and save, show and close the plots that you make.
What might have looked difficult before will definitely become clearer once you start using this cheat sheet! Use it in combination with the Matplotlib Gallery and the documentation.
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms.
>>> import numpy as np
>>> x = np.linspace(0, 10, 100)
>>> y = np.cos(x)
>>> z = np.sin(x)
>>> data = 2 * np.random.random((10, 10))
>>> data2 = 3 * np.random.random((10, 10))
>>> Y, X = np.mgrid[-3:3:100j, -3:3:100j]
>>> U = -1 - X**2 + Y
>>> V = 1 + X - Y**2
>>> from matplotlib.cbook import get_sample_data
>>> img = np.load(get_sample_data('axes_grid/bivariate_normal.npy'))
>>> import matplotlib.pyplot as plt
>>> fig = plt.figure()
>>> fig2 = plt.figure(figsize=plt.figaspect(2.0))
>>> fig.add_axes()
>>> ax1 = fig.add_subplot(221) #row-col-num
>>> ax3 = fig.add_subplot(212)
>>> fig3, axes = plt.subplots(nrows=2,ncols=2)
>>> fig4, axes2 = plt.subplots(ncols=3)
>>> plt.savefig('foo.png') #Save figures
>>> plt.savefig('foo.png', transparent=True) #Save transparent figures
>>> plt.show()
>>> fig, ax = plt.subplots()
>>> lines = ax.plot(x,y) #Draw points with lines or markers connecting them
>>> ax.scatter(x,y) #Draw unconnected points, scaled or colored
>>> axes[0,0].bar([1,2,3],[3,4,5]) #Plot vertical rectangles (constant width)
>>> axes[1,0].barh([0.5,1,2.5],[0,1,2]) #Plot horizontal rectangles (constant height)
>>> axes[1,1].axhline(0.45) #Draw a horizontal line across axes
>>> axes[0,1].axvline(0.65) #Draw a vertical line across axes
>>> ax.fill(x,y,color='blue') #Draw filled polygons
>>> ax.fill_between(x,y,color='yellow') #Fill between y values and 0
>>> fig, ax = plt.subplots()
>>> im = ax.imshow(img, #Colormapped or RGB arrays
cmap= 'gist_earth',
interpolation= 'nearest',
vmin=-2,
vmax=2)
>>> axes2[0].pcolor(data2) #Pseudocolor plot of 2D array
>>> axes2[0].pcolormesh(data) #Pseudocolor plot of 2D array
>>> CS = plt.contour(Y,X,U) #Plot contours
>>> axes2[2].contourf(data2) #Plot filled contours
>>> axes2[2]= ax.clabel(CS) #Label a contour plot
>>> axes[0,1].arrow(0,0,0.5,0.5) #Add an arrow to the axes
>>> axes[1,1].quiver(y,z) #Plot a 2D field of arrows
>>> axes[0,1].streamplot(X,Y,U,V) #Plot a 2D field of arrows
>>> ax1.hist(y) #Plot a histogram
>>> ax3.boxplot(y) #Make a box and whisker plot
>>> ax3.violinplot(z) #Make a violin plot
The basic steps to creating plots with matplotlib are:
1 Prepare Data
2 Create Plot
3 Plot
4 Customize Plot
5 Save Plot
6 Show Plot
>>> import matplotlib.pyplot as plt
>>> x = [1,2,3,4] #Step 1
>>> y = [10,20,25,30]
>>> fig = plt.figure() #Step 2
>>> ax = fig.add_subplot(111) #Step 3
>>> ax.plot(x, y, color= 'lightblue', linewidth=3) #Step 3, 4
>>> ax.scatter([2,4,6],
[5,15,25],
color= 'darkgreen',
marker= '^' )
>>> ax.set_xlim(1, 6.5)
>>> plt.savefig('foo.png' ) #Step 5
>>> plt.show() #Step 6
>>> plt.cla() #Clear an axis
>>> plt.clf() #Clear the entire figure
>>> plt.close() #Close a window
>>> plt.plot(x, x, x, x**2, x, x**3)
>>> ax.plot(x, y, alpha = 0.4)
>>> ax.plot(x, y, c= 'k')
>>> fig.colorbar(im, orientation= 'horizontal')
>>> im = ax.imshow(img,
cmap= 'seismic' )
>>> fig, ax = plt.subplots()
>>> ax.scatter(x,y,marker= ".")
>>> ax.plot(x,y,marker= "o")
>>> plt.plot(x,y,linewidth=4.0)
>>> plt.plot(x,y,ls= 'solid')
>>> plt.plot(x,y,ls= '--')
>>> plt.plot(x,y,'--' ,x**2,y**2,'-.' )
>>> plt.setp(lines,color= 'r',linewidth=4.0)
>>> ax.text(1,
-2.1,
'Example Graph',
style= 'italic' )
>>> ax.annotate("Sine",
xy=(8, 0),
xycoords= 'data',
xytext=(10.5, 0),
textcoords= 'data',
arrowprops=dict(arrowstyle= "->",
connectionstyle="arc3"),)
>>> plt.title(r'$\sigma_i=15$', fontsize=20)
Limits & Autoscaling
>>> ax.margins(x=0.0,y=0.1) #Add padding to a plot
>>> ax.axis('equal') #Set the aspect ratio of the plot to 1
>>> ax.set(xlim=[0,10.5],ylim=[-1.5,1.5]) #Set limits for x-and y-axis
>>> ax.set_xlim(0,10.5) #Set limits for x-axis
Legends
>>> ax.set(title= 'An Example Axes', #Set a title and x-and y-axis labels
ylabel= 'Y-Axis',
xlabel= 'X-Axis')
>>> ax.legend(loc= 'best') #No overlapping plot elements
Ticks
>>> ax.xaxis.set(ticks=range(1,5), #Manually set x-ticks
ticklabels=[3,100, 12,"foo" ])
>>> ax.tick_params(axis= 'y', #Make y-ticks longer and go in and out
direction= 'inout',
length=10)
Subplot Spacing
>>> fig3.subplots_adjust(wspace=0.5, #Adjust the spacing between subplots
hspace=0.3,
left=0.125,
right=0.9,
top=0.9,
bottom=0.1)
>>> fig.tight_layout() #Fit subplot(s) in to the figure area
Axis Spines
>>> ax1.spines[ 'top'].set_visible(False) #Make the top axis line for a plot invisible
>>> ax1.spines['bottom' ].set_position(( 'outward',10)) #Move the bottom axis line outward
Have this Cheat Sheet at your fingertips
Original article source at https://www.datacamp.com
#matplotlib #cheatsheet #python
1655550000
PNChart
You can also find swift version at here https://github.com/kevinzhow/PNChart-Swift
A simple and beautiful chart lib with animation used in Piner and CoinsMan for iOS
PNChart works on iOS 7.0+ and is compatible with ARC projects. If you need support for iOS 6, use PNChart <= 0.8.1. Note that 0.8.2 supports iOS 8.0+ only, 0.8.3 and newer supports iOS 7.0+.
It depends on the following Apple frameworks, which should already be included with most Xcode templates:
You will need LLVM 3.0 or later in order to build PNChart.
CocoaPods is the recommended way to add PNChart to your project.
pod 'PNChart'
pod install
#import "PNChart.h"
//For Line Chart
PNLineChart * lineChart = [[PNLineChart alloc] initWithFrame:CGRectMake(0, 135.0, SCREEN_WIDTH, 200.0)];
[lineChart setXLabels:@[@"SEP 1",@"SEP 2",@"SEP 3",@"SEP 4",@"SEP 5"]];
// Line Chart No.1
NSArray * data01Array = @[@60.1, @160.1, @126.4, @262.2, @186.2];
PNLineChartData *data01 = [PNLineChartData new];
data01.color = PNFreshGreen;
data01.itemCount = lineChart.xLabels.count;
data01.getData = ^(NSUInteger index) {
CGFloat yValue = [data01Array[index] floatValue];
return [PNLineChartDataItem dataItemWithY:yValue];
};
// Line Chart No.2
NSArray * data02Array = @[@20.1, @180.1, @26.4, @202.2, @126.2];
PNLineChartData *data02 = [PNLineChartData new];
data02.color = PNTwitterColor;
data02.itemCount = lineChart.xLabels.count;
data02.getData = ^(NSUInteger index) {
CGFloat yValue = [data02Array[index] floatValue];
return [PNLineChartDataItem dataItemWithY:yValue];
};
lineChart.chartData = @[data01, data02];
[lineChart strokeChart];
You can choose to show smooth lines.
lineChart.showSmoothLines = YES;
You can set different colors for the same PNLineChartData item. For instance, you can use red for values less than 50 and purple for values greater than 150.
lineChartData.rangeColors = @[
[[PNLineChartColorRange alloc] initWithRange:NSMakeRange(10, 30) color:[UIColor redColor]],
[[PNLineChartColorRange alloc] initWithRange:NSMakeRange(100, 200) color:[UIColor purpleColor]]
];
#import "PNChart.h"
//For Bar Chart
PNBarChart * barChart = [[PNBarChart alloc] initWithFrame:CGRectMake(0, 135.0, SCREEN_WIDTH, 200.0)];
[barChart setXLabels:@[@"SEP 1",@"SEP 2",@"SEP 3",@"SEP 4",@"SEP 5"]];
[barChart setYValues:@[@1, @10, @2, @6, @3]];
[barChart strokeChart];
#import "PNChart.h"
//For Circle Chart
PNCircleChart * circleChart = [[PNCircleChart alloc] initWithFrame:CGRectMake(0, 80.0, SCREEN_WIDTH, 100.0) total:[NSNumber numberWithInt:100] current:[NSNumber numberWithInt:60] clockwise:NO shadow:NO];
circleChart.backgroundColor = [UIColor clearColor];
[circleChart setStrokeColor:PNGreen];
[circleChart strokeChart];
# import "PNChart.h"
//For Pie Chart
NSArray *items = @[[PNPieChartDataItem dataItemWithValue:10 color:PNRed],
[PNPieChartDataItem dataItemWithValue:20 color:PNBlue description:@"WWDC"],
[PNPieChartDataItem dataItemWithValue:40 color:PNGreen description:@"GOOL I/O"],
];
PNPieChart *pieChart = [[PNPieChart alloc] initWithFrame:CGRectMake(40.0, 155.0, 240.0, 240.0) items:items];
pieChart.descriptionTextColor = [UIColor whiteColor];
pieChart.descriptionTextFont = [UIFont fontWithName:@"Avenir-Medium" size:14.0];
[pieChart strokeChart];
# import "PNChart.h"
//For Scatter Chart
PNScatterChart *scatterChart = [[PNScatterChart alloc] initWithFrame:CGRectMake(SCREEN_WIDTH /6.0 - 30, 135, 280, 200)];
[scatterChart setAxisXWithMinimumValue:20 andMaxValue:100 toTicks:6];
[scatterChart setAxisYWithMinimumValue:30 andMaxValue:50 toTicks:5];
NSArray * data01Array = [self randomSetOfObjects];
PNScatterChartData *data01 = [PNScatterChartData new];
data01.strokeColor = PNGreen;
data01.fillColor = PNFreshGreen;
data01.size = 2;
data01.itemCount = [[data01Array objectAtIndex:0] count];
data01.inflexionPointStyle = PNScatterChartPointStyleCircle;
__block NSMutableArray *XAr1 = [NSMutableArray arrayWithArray:[data01Array objectAtIndex:0]];
__block NSMutableArray *YAr1 = [NSMutableArray arrayWithArray:[data01Array objectAtIndex:1]];
data01.getData = ^(NSUInteger index) {
CGFloat xValue = [[XAr1 objectAtIndex:index] floatValue];
CGFloat yValue = [[YAr1 objectAtIndex:index] floatValue];
return [PNScatterChartDataItem dataItemWithX:xValue AndWithY:yValue];
};
[scatterChart setup];
self.scatterChart.chartData = @[data01];
/***
this is for drawing line to compare
CGPoint start = CGPointMake(20, 35);
CGPoint end = CGPointMake(80, 45);
[scatterChart drawLineFromPoint:start ToPoint:end WithLineWith:2 AndWithColor:PNBlack];
***/
scatterChart.delegate = self;
Legend has been added to PNChart for Line and Pie Charts. Legend items position can be stacked or in series.
#import "PNChart.h"
//For Line Chart
//Add Line Titles for the Legend
data01.dataTitle = @"Alpha";
data02.dataTitle = @"Beta Beta Beta Beta";
//Build the legend
self.lineChart.legendStyle = PNLegendItemStyleSerial;
UIView *legend = [self.lineChart getLegendWithMaxWidth:320];
//Move legend to the desired position and add to view
[legend setFrame:CGRectMake(100, 400, legend.frame.size.width, legend.frame.size.height)];
[self.view addSubview:legend];
//For Pie Chart
//Build the legend
self.pieChart.legendStyle = PNLegendItemStyleStacked;
UIView *legend = [self.pieChart getLegendWithMaxWidth:200];
//Move legend to the desired position and add to view
[legend setFrame:CGRectMake(130, 350, legend.frame.size.width, legend.frame.size.height)];
[self.view addSubview:legend];
Grid lines have been added to PNChart for Line Chart.
lineChart.showYGridLines = YES;
lineChart.yGridLinesColor = [UIColor grayColor];
Now it's easy to update values in real time
if ([self.title isEqualToString:@"Line Chart"]) {
// Line Chart #1
NSArray * data01Array = @[@(arc4random() % 300), @(arc4random() % 300), @(arc4random() % 300), @(arc4random() % 300), @(arc4random() % 300), @(arc4random() % 300), @(arc4random() % 300)];
PNLineChartData *data01 = [PNLineChartData new];
data01.color = PNFreshGreen;
data01.itemCount = data01Array.count;
data01.inflexionPointStyle = PNLineChartPointStyleTriangle;
data01.getData = ^(NSUInteger index) {
CGFloat yValue = [data01Array[index] floatValue];
return [PNLineChartDataItem dataItemWithY:yValue];
};
// Line Chart #2
NSArray * data02Array = @[@(arc4random() % 300), @(arc4random() % 300), @(arc4random() % 300), @(arc4random() % 300), @(arc4random() % 300), @(arc4random() % 300), @(arc4random() % 300)];
PNLineChartData *data02 = [PNLineChartData new];
data02.color = PNTwitterColor;
data02.itemCount = data02Array.count;
data02.inflexionPointStyle = PNLineChartPointStyleSquare;
data02.getData = ^(NSUInteger index) {
CGFloat yValue = [data02Array[index] floatValue];
return [PNLineChartDataItem dataItemWithY:yValue];
};
[self.lineChart setXLabels:@[@"DEC 1",@"DEC 2",@"DEC 3",@"DEC 4",@"DEC 5",@"DEC 6",@"DEC 7"]];
[self.lineChart updateChartData:@[data01, data02]];
}
else if ([self.title isEqualToString:@"Bar Chart"])
{
[self.barChart setXLabels:@[@"Jan 1",@"Jan 2",@"Jan 3",@"Jan 4",@"Jan 5",@"Jan 6",@"Jan 7"]];
[self.barChart updateChartData:@[@(arc4random() % 30),@(arc4random() % 30),@(arc4random() % 30),@(arc4random() % 30),@(arc4random() % 30),@(arc4random() % 30),@(arc4random() % 30)]];
}
else if ([self.title isEqualToString:@"Circle Chart"])
{
[self.circleChart updateChartByCurrent:@(arc4random() % 100)];
}
#import "PNChart.h"
//For LineChart
lineChart.delegate = self;
Animation is enabled by default when drawing all charts. It can be disabled by setting displayAnimation = NO.
#import "PNChart.h"
//For LineChart
lineChart.displayAnimation = NO;
//For DelegateMethod
-(void)userClickedOnLineKeyPoint:(CGPoint)point lineIndex:(NSInteger)lineIndex pointIndex:(NSInteger)pointIndex{
NSLog(@"Click Key on line %f, %f line index is %d and point index is %d",point.x, point.y,(int)lineIndex, (int)pointIndex);
}
-(void)userClickedOnLinePoint:(CGPoint)point lineIndex:(NSInteger)lineIndex{
NSLog(@"Click on line %f, %f, line index is %d",point.x, point.y, (int)lineIndex);
}
Author: kevinzhow
Source Code: https://github.com/kevinzhow/PNChart
License: MIT license