Starter project for today's session: https://github.com/drakosi99/imessge-starter-project
In today's video, David will be transforming the iMessage clone into a fully functioning MERN stack build.
Let’s Gooooo
#mongodb #node #express #react #javascript
This article is a practical guide to unsupervised learning algorithms in machine learning. Machine learning is a fast-growing technology that allows computers to learn from the past and predict the future. It uses numerous algorithms to build mathematical models and predict future trends. Machine learning (ML) has widespread applications in industry, including speech recognition, image recognition, churn prediction, email filtering, chatbot development, recommender systems, and much more.
Machine learning (ML) can be classified into three main categories: supervised, unsupervised, and reinforcement learning. In supervised learning, the model is trained on labeled data, while in unsupervised learning, unlabeled data is provided to the model to predict the outcomes. Reinforcement learning is feedback-based learning in which the agent collects a reward for each correct action and receives a penalty for a wrong decision. The goal of the learning agent is to maximize reward points and reduce errors.
In unsupervised learning, the model learns from unlabeled data without proper supervision.
Unsupervised learning uses machine learning techniques to cluster unlabeled data based on similarities and differences. Unsupervised algorithms discover hidden patterns in data without human supervision. The aim is to organize raw data into new features, or to group together data with similar patterns.
For instance, to predict the churn rate, we provide unlabeled data to our model for prediction. There is no information given about whether customers have churned or not. The model will analyze the data, find hidden patterns, and categorize the customers into two clusters: churned and non-churned.
Unsupervised algorithms can be used for three tasks—clustering, dimensionality reduction, and association. Below, we will highlight some commonly used clustering and association algorithms.
Clustering, or cluster analysis, is a popular data mining technique for unsupervised learning. The clustering approach works to group non-labeled data based on similarities and differences. Unlike supervised learning, clustering algorithms discover natural groupings in data.
A good clustering method produces high-quality clusters with high intra-class similarity (data within a cluster is similar) and low inter-class similarity (data in one cluster is dissimilar to data in other clusters).
Clustering can be defined as the grouping of data points into various clusters of similar data points; similar objects remain in a group that has little similarity with other groups. Here, we will discuss two popular clustering techniques: K-means clustering and DBSCAN clustering.
K-means is the simplest unsupervised technique used to solve clustering problems. It groups unlabeled data into various clusters. The value of K defines the number of clusters; you need to tell the algorithm how many clusters to create.
K-Means is a centroid-based algorithm in which each cluster is associated with the centroid. The goal is to minimize the sum of the distances between the data points and their corresponding clusters.
It is an iterative approach that breaks down the unlabeled data into different clusters so that each data point belongs to a group with similar characteristics.
K-means clustering performs two tasks: it determines the best positions for the K center points (centroids) through an iterative process, and it assigns each data point to its closest centroid.
An illustration of K-means clustering. Image source
"DBSCAN" stands for "density-based spatial clustering of applications with noise." There are three main ideas in DBSCAN: density, clustering, and noise. The algorithm uses the notion of density-based clustering to form clusters and detect noise.
Clusters are usually dense regions separated by regions of lower density. Unlike the k-means algorithm, which works well only on well-separated clusters, DBSCAN has a wider scope and can even create clusters within a cluster. It discovers clusters of various shapes and sizes from large datasets that contain noise and outliers.
There are two parameters in the DBSCAN algorithm:
minPts: The threshold, i.e., the minimum number of points grouped together for a region to be considered dense.
eps (ε): The distance measure used to locate points in the neighborhood of any point.
An illustration of density-based clustering. Image Source
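To make this concrete, here is a minimal sketch of density-based clustering using scikit-learn's DBSCAN class (the synthetic dataset and the eps/min_samples values are illustrative, not taken from this article):
#A minimal DBSCAN sketch on synthetic two-moons data
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons
X, _ = make_moons(n_samples=300, noise=0.05, random_state=42)
db = DBSCAN(eps=0.2, min_samples=5).fit(X)   #eps and minPts, as described above
labels = db.labels_                          #label -1 marks noise points
print("clusters:", len(set(labels) - {-1}), "noise points:", (labels == -1).sum())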
Association rule mining is a popular data mining technique. It finds interesting correlations among large numbers of data items. An association rule shows how frequently items occur together in a transaction.
Market Basket Analysis is a typical example of an association rule mining that finds relationships between items in the grocery store. It enables retailers to identify and analyze the associations between items that people frequently buy.
Important terminology used in association rules:
Support: It tells us how frequently an item or a combination of items is bought.
Confidence: It tells us how often the items A and B occur together, given the number of times A occurs.
Lift: The lift indicates the strength of a rule over the random co-occurrence of A and B. For instance, for the rule A->B, a lift value of 5 means that buying A makes the purchase of B five times more likely than chance.
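As a quick worked example with hypothetical counts (purely illustrative numbers): suppose that out of 100 transactions, 20 contain item A, 25 contain item B, and 10 contain both.
#Worked example with made-up counts
n, n_a, n_b, n_ab = 100, 20, 25, 10
support_ab = n_ab / n                   #0.10: A and B appear together in 10% of transactions
confidence_a_b = n_ab / n_a             #0.50: when A is bought, B is bought half the time
lift_a_b = confidence_a_b / (n_b / n)   #2.0: the rule A->B is twice as strong as chance
print(support_ab, confidence_a_b, lift_a_b)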
The Apriori algorithm is a well-known association rule based technique.
The Apriori algorithm was proposed by R. Agrawal and R. Srikant in 1994 to find the frequent itemsets in a dataset. The algorithm's name is based on the fact that it uses prior knowledge of frequently occurring items.
The Apriori algorithm finds frequently occurring items with minimum support.
It consists of two steps: a join step, which generates candidate itemsets of size k from the frequent itemsets of size k-1, and a prune step, which discards candidates that fall below the minimum support.
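As a rough illustration of the idea, here is a toy sketch on made-up transactions. Note that it brute-forces all size-k candidates instead of performing the level-wise join of the real algorithm, so it only shows the support-pruning logic:
#Toy Apriori-style search for frequent itemsets of size k
from itertools import combinations
transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk", "butter"}]
def frequent_itemsets(transactions, min_support, k):
    items = {i for t in transactions for i in t}
    freq = {}
    for c in combinations(sorted(items), k):                       #candidate generation
        s = sum(set(c) <= t for t in transactions) / len(transactions)
        if s >= min_support:                                       #prune by minimum support
            freq[c] = s
    return freq
print(frequent_itemsets(transactions, min_support=0.5, k=2))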
In this tutorial, you will learn about the implementation of various unsupervised algorithms in Python. Scikit-learn is a powerful open-source Python library widely used for unsupervised learning tasks; it provides numerous robust algorithms for classification, dimensionality reduction, and clustering. For association rule mining, we will use the mlxtend library later in this tutorial.
Let’s begin!
Now let’s dive deep into the implementation of the K-Means algorithm in Python. We’ll break down each code snippet so that you can understand it easily.
First of all, we will import the required libraries and get access to the functions.
#Let's import the required libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
The dataset is taken from the Kaggle website, and you can easily download it from the given link. To load the dataset, we use the pd.read_csv() function; head() returns the first five rows of the dataset.
my_data = pd.read_csv('Customers_Mall.csv')
my_data.head()
The dataset contains five columns: customer ID, gender, age, annual income (k$), and spending score (1-100).
The info() function is used to get quick information about the dataset. It shows the number of entries, columns, total non-null values, memory usage, and datatypes.
my_data.info()
To check the missing values in the dataset, we use isnull().sum(), which returns the total number of null values.
#Check missing values
my_data.isnull().sum()
The box plot (or whisker plot) is used to detect outliers in the dataset. It also shows the statistical five-number summary: the minimum, first quartile, median (second quartile), third quartile, and maximum.
my_data.boxplot(figsize=(8,4))
Using the box plot, we've detected an outlier in the annual income column. Now we will remove it before training our model.
#let's remove the outlier: replace incomes above 120 with med (61)
med = 61
my_data["Annual Income (k$)"] = np.where(my_data["Annual Income (k$)"] > 120,
                                         med, my_data["Annual Income (k$)"])
The outlier in the annual income column has now been removed. To confirm, we use the box plot again.
my_data.boxplot(figsize=(8,5))
A histogram is used to illustrate the important features of the distribution of data. The hist() function is used to show the distribution of data in each numerical column.
my_data.hist(figsize=(6,6))
The correlation heatmap is used to find the potential relationships between variables in the data and to display the strength of those relationships. To display the heatmap, we have used the seaborn plotting library.
plt.figure(figsize=(10,6))
sns.heatmap(my_data.corr(), annot=True, cmap='icefire').set_title('seaborn')
plt.show()
The iloc indexer is used to select particular rows and columns of the data by position. Here, we've selected the annual income and spending score columns.
X_val = my_data.iloc[:, 3:].values
X_val
# Loading Kmeans Library
from sklearn.cluster import KMeans
Now we will select the best value for K using the elbow method, which is used to determine the optimal number of clusters in K-means clustering.
my_val = []
for i in range(1, 11):
    kmeans = KMeans(n_clusters=i, init='k-means++', random_state=123)
    kmeans.fit(X_val)
    my_val.append(kmeans.inertia_)
The sklearn.cluster.KMeans() constructor sets the number of clusters along with the other initialization parameters. To display the result, just call the variable.
my_val
#Visualization of clusters using the elbow method
plt.plot(range(1,11), my_val)
plt.xlabel('Number of clusters')
plt.ylabel('Inertia (within-cluster sum of squares)')
plt.title('The Elbow Method')
plt.show()
With the elbow method, when the graph looks like an arm, the elbow of the arm is the best value of K. In this case, we've taken K=3, which is the optimal value for K.
kmeans = KMeans(n_clusters = 3, init='k-means++')
kmeans.fit(X_val)
#To show centroids of clusters
kmeans.cluster_centers_
#Prediction of K-Means clustering
y_kmeans = kmeans.fit_predict(X_val)
y_kmeans
The scatter graph is used to plot the classification results of our dataset into three clusters.
plt.scatter(X_val[y_kmeans == 0,0], X_val[y_kmeans == 0,1], c='red',s=100)
plt.scatter(X_val[y_kmeans == 1,0], X_val[y_kmeans == 1,1], c='green',s=100)
plt.scatter(X_val[y_kmeans == 2,0], X_val[y_kmeans == 2,1], c='orange',s=100)
plt.scatter(kmeans.cluster_centers_[:,0], kmeans.cluster_centers_[:,1], s=300, c='brown')
plt.title('K-Means Unsupervised Learning')
plt.show()
To implement the Apriori algorithm, we will use "The Bread Basket" dataset. The dataset is available on Kaggle, and you can download it from the link. Association rule algorithms like this suggest products based on users' purchase histories; retailers such as Walmart have made extensive use of them to recommend relevant items to their users.
Let’s implement the Apriori algorithm in Python.
To implement the algorithm, we need to import some important libraries.
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
The dataset contains five columns and 20,507 entries. The date_time column is prominent, and we can extract many vital insights from it.
my_data= pd.read_csv("bread basket.csv")
my_data.head()
Convert date_time into an appropriate format.
my_data['date_time'] = pd.to_datetime(my_data['date_time'])
#Total number of unique transactions
my_data['Transaction'].nunique()
Now we want to extract new columns from date_time to obtain meaningful information from the data.
#Let's extract date
my_data['date'] = my_data['date_time'].dt.date
#Let's extract time
my_data['time'] = my_data['date_time'].dt.time
#Extract month and replace it with a string
my_data['month'] = my_data['date_time'].dt.month
my_data['month'] = my_data['month'].replace((1,2,3,4,5,6,7,8,9,10,11,12),
                   ('Jan','Feb','Mar','Apr','May','Jun','Jul','Aug',
                    'Sep','Oct','Nov','Dec'))
#Extract hour
my_data['hour'] = my_data['date_time'].dt.hour
#Replace hours with text ranges
hr_num = (1,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23)
hr_obj = ('1-2','7-8','8-9','9-10','10-11','11-12','12-13','13-14','14-15',
          '15-16','16-17','17-18','18-19','19-20','20-21','21-22','22-23','23-24')
my_data['hour'] = my_data['hour'].replace(hr_num, hr_obj)
#Extract weekday and replace it with a string
my_data['weekday'] = my_data['date_time'].dt.weekday
my_data['weekday'] = my_data['weekday'].replace((0,1,2,3,4,5,6),
                     ('Mon','Tues','Wed','Thur','Fri','Sat','Sun'))
#Now drop the date_time column
my_data.drop('date_time', axis = 1, inplace = True)
After extracting the date, time, month, and hour columns, we dropped the date_time column.
Now to display, we simply use the head() function to see the changes in the dataset.
my_data.head()
#cleaning the Item column
my_data['Item'] = my_data['Item'].str.strip()
my_data['Item'] = my_data['Item'].str.lower()
my_data.head()
To display the top 10 items purchased by customers, we use the barplot() function from the seaborn library.
plt.figure(figsize=(10,5))
sns.barplot(x=my_data.Item.value_counts().head(10).index, y=my_data.Item.value_counts().head(10).values,palette='RdYlGn')
plt.xlabel('Items', size = 17)
plt.xticks(rotation=45)
plt.ylabel('Number of purchases', size = 18)
plt.title('Top 10 Items purchased', color = 'blue', size = 23)
plt.show()
From the graph, coffee is the top item purchased by the customers, followed by bread.
Now, to display the number of orders received each month, the groupby() function is used along with barplot() to visually show the results.
mon_Tran = my_data.groupby('month')['Transaction'].count().reset_index()
#assign each month its calendar position so the bars appear in calendar order
mon_Tran.loc[:,"mon_order"] = [4,8,12,2,1,7,6,3,5,11,10,9]
mon_Tran.sort_values("mon_order", inplace=True)
plt.figure(figsize=(12,5))
sns.barplot(data = mon_Tran, x = "month", y = "Transaction")
plt.xlabel('Months', size = 14)
plt.ylabel('Monthly Orders', size = 14)
plt.title('No of orders received each month', color = 'blue', size = 18)
plt.show()
To show the number of orders received each day, we applied groupby() to the weekday column.
wk_Tran = my_data.groupby('weekday')['Transaction'].count().reset_index()
wk_Tran.loc[:,"wk_ord"] = [4,0,5,6,3,1,2]
wk_Tran.sort_values("wk_ord",inplace=True)
plt.figure(figsize=(11,4))
sns.barplot(data = wk_Tran, x = "weekday", y = "Transaction",palette='RdYlGn')
plt.xlabel('Week Day', size = 14)
plt.ylabel('Per day orders', size = 14)
plt.title('Orders received per day', color = 'blue', size = 18)
plt.show()
We import the mlxtend library to implement the association rules and count the number of items.
from mlxtend.frequent_patterns import association_rules, apriori
tran_str= my_data.groupby(['Transaction', 'Item'])['Item'].count().reset_index(name ='Count')
tran_str.head(8)
Now we'll make an m×n matrix, where m is the number of transactions and n is the number of items, and each row records whether each item was in the transaction or not.
Mar_baskt = tran_str.pivot_table(index='Transaction', columns='Item', values='Count', aggfunc='sum').fillna(0)
Mar_baskt.head()
We want to make a function that returns 0 and 1. 0 means that the item wasn’t present in the transaction, while 1 means the item exists.
def encode(val):
    if val <= 0:
        return 0
    if val >= 1:
        return 1
#Let's apply the function to the dataset
Basket=Mar_baskt.applymap(encode)
Basket.head()
#using the apriori algorithm with min_support=0.01 (i.e., 1% frequency)
freq_items = apriori(Basket, min_support = 0.01, use_colnames = True)
freq_items.head()
We use the association_rules() function to generate rules from the frequent itemsets.
App_rule= association_rules(freq_items, metric = "lift", min_threshold = 1)
App_rule.sort_values('confidence', ascending = False, inplace = True)
App_rule.head()
From the above implementation, the top rule relates toast and coffee, with a lift value of 1.47 and a confidence value of 0.70.
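If you want to narrow the output down further, you can filter the rules DataFrame by its metric columns (the thresholds below are arbitrary, for illustration only):
#Keep only reasonably strong rules (illustrative thresholds)
strong_rules = App_rule[(App_rule['confidence'] >= 0.5) & (App_rule['lift'] > 1.2)]
strong_rules[['antecedents', 'consequents', 'support', 'confidence', 'lift']].head()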
Principal component analysis (PCA) is one of the most widely used unsupervised learning techniques. It can be used for various tasks, including dimensionality reduction, information compression, exploratory data analysis, and data de-noising.
Let’s use the PCA algorithm!
First we import the required libraries to implement this algorithm.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from sklearn.decomposition import PCA
from sklearn.datasets import load_digits
To implement the PCA algorithm, we use scikit-learn's load_digits dataset, which can easily be loaded using the command below. The dataset contains 1,797 images, each represented by 64 features (one per pixel of an 8x8 image).
#Load the dataset
my_data= load_digits()
#Creating features
X_value = my_data.data
#Creating target
y_value = my_data.target
#Let's check the shape of X_value
X_value.shape
#Each image is 8x8 pixels, i.e., 64 pixels in total
my_data.images[10]
#Let's display the image
plt.gray()
plt.matshow(my_data.images[34])
plt.show()
Now let’s project data from 64 columns to 16 to show how 16 dimensions classify the data.
X_val = my_data.data
y_val = my_data.target
my_pca = PCA(16)
X_projection = my_pca.fit_transform(X_val)
print(X_val.shape)
print(X_projection.shape)
Using a colormap, we can visualize how well the data points separate using just the first two of the projected dimensions. Next, we'll select the optimal number of dimensions (principal components) to which the data can be reduced.
plt.scatter(X_projection[:, 0], X_projection[:, 1], c=y_val, edgecolor='white',
cmap=plt.cm.get_cmap("gist_heat",12))
plt.colorbar();
pca = PCA().fit(X_val)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('Principal components')
plt.ylabel('Cumulative explained variance')
Based on this graph, only 12 components are required to explain more than 80% of the variance, which is still much better than computing all 64 features. Thus, we've reduced the large number of dimensions to 12 to avoid the curse of dimensionality.
#Let's visualize how it looks like
Unsupervised_pca = PCA(12)
X_pro = Unsupervised_pca.fit_transform(X_val)
print("New Data Shape is =>",X_pro.shape)
#Let's Create a scatter plot
plt.scatter(X_pro[:, 0], X_pro[:, 1], c=y_val, edgecolor='white',
cmap=plt.cm.get_cmap("nipy_spectral",10))
plt.colorbar();
In this machine learning tutorial, we've implemented the K-means, Apriori, and PCA algorithms. These are some of the most widely used algorithms; they have numerous industrial applications and solve many real-world problems. For instance, K-means clustering is used in astronomy to study stellar and galaxy spectra, solar polarization spectra, and X-ray spectra, and Apriori is used by retail stores to optimize their product inventory.
Dreaming of becoming a data scientist or data analyst, even without a university or college degree? Do you need data science and analysis knowledge for a promotion in your current role?
Are you interested in securing your dream job in data science and analysis and looking for a way to get started? We can help you. With over 10 years of experience in data science and data analysis, we will teach you the rubrics, guiding you with one-on-one lessons from the fundamentals until you become a pro.
Our courses are affordable and easy to understand, with numerous exercises and assignments to learn from. On completing our courses, you'll be equipped with the technical and practical skills to take on any data science or data analysis role, collaborate effectively in teams, and help businesses meet and exceed their objectives by extracting actionable insights from data.
Original article sourced at: https://thedatascientist.com
Flutter Console Coverage Test
This small Dart tool generates a Flutter coverage test report in the console.
Add a line like this to your package's pubspec.yaml (and run an implicit flutter pub get):
dev_dependencies:
  test_cov_console: ^0.2.2
flutter pub get
Running "flutter pub get" in coverage... 0.5s
flutter test --coverage
00:02 +1: All tests passed!
flutter pub run test_cov_console
---------------------------------------------|---------|---------|---------|-------------------|
File |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/ | | | | |
print_cov.dart | 100.00 | 100.00 | 88.37 |...,149,205,206,207|
print_cov_constants.dart | 0.00 | 0.00 | 0.00 | no unit testing|
lib/ | | | | |
test_cov_console.dart | 0.00 | 0.00 | 0.00 | no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
All files with unit testing | 100.00 | 100.00 | 88.37 | |
---------------------------------------------|---------|---------|---------|-------------------|
If not given a FILE, "coverage/lcov.info" will be used.
-f, --file=<FILE> The target lcov.info file to be reported
-e, --exclude=<STRING1,STRING2,...> A list of substrings for files without unit testing
 to be excluded from the report
-l, --line It will print Lines & Uncovered Lines only
Branch & Functions coverage percentage will not be printed
-i, --ignore It will not print any file without unit testing
-m, --multi Report from multiple lcov.info files
-c, --csv Output to CSV file
-o, --output=<CSV-FILE> Full path of output CSV file
If not given, "coverage/test_cov_console.csv" will be used
-t, --total Print only the total coverage
Note: it will ignore all other option (if any), except -m
-p, --pass=<MINIMUM> Print only whether the total coverage passes the MINIMUM value
If the value >= MINIMUM, it will print PASSED, otherwise FAILED
Note: it will ignore all other option (if any), except -m
-h, --help Show this help
flutter pub run test_cov_console --file=coverage/lcov.info --exclude=_constants,_mock
---------------------------------------------|---------|---------|---------|-------------------|
File |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/ | | | | |
print_cov.dart | 100.00 | 100.00 | 88.37 |...,149,205,206,207|
lib/ | | | | |
test_cov_console.dart | 0.00 | 0.00 | 0.00 | no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
All files with unit testing | 100.00 | 100.00 | 88.37 | |
---------------------------------------------|---------|---------|---------|-------------------|
It supports running against multiple lcov.info files with the following directory structures:
1. No root module
<root>/<module_a>
<root>/<module_a>/coverage/lcov.info
<root>/<module_a>/lib/src
<root>/<module_b>
<root>/<module_b>/coverage/lcov.info
<root>/<module_b>/lib/src
...
2. With root module
<root>/coverage/lcov.info
<root>/lib/src
<root>/<module_a>
<root>/<module_a>/coverage/lcov.info
<root>/<module_a>/lib/src
<root>/<module_b>
<root>/<module_b>/coverage/lcov.info
<root>/<module_b>/lib/src
...
You must run test_cov_console in the <root> directory, and the report will be grouped by module. Here is the sample output for the 'with root module' directory structure:
flutter pub run test_cov_console --file=coverage/lcov.info --exclude=_constants,_mock --multi
---------------------------------------------|---------|---------|---------|-------------------|
File |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/ | | | | |
print_cov.dart | 100.00 | 100.00 | 88.37 |...,149,205,206,207|
lib/ | | | | |
test_cov_console.dart | 0.00 | 0.00 | 0.00 | no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
All files with unit testing | 100.00 | 100.00 | 88.37 | |
---------------------------------------------|---------|---------|---------|-------------------|
---------------------------------------------|---------|---------|---------|-------------------|
File - module_a - |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/ | | | | |
print_cov.dart | 100.00 | 100.00 | 88.37 |...,149,205,206,207|
lib/ | | | | |
test_cov_console.dart | 0.00 | 0.00 | 0.00 | no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
All files with unit testing | 100.00 | 100.00 | 88.37 | |
---------------------------------------------|---------|---------|---------|-------------------|
---------------------------------------------|---------|---------|---------|-------------------|
File - module_b - |% Branch | % Funcs | % Lines | Uncovered Line #s |
---------------------------------------------|---------|---------|---------|-------------------|
lib/src/ | | | | |
print_cov.dart | 100.00 | 100.00 | 88.37 |...,149,205,206,207|
lib/ | | | | |
test_cov_console.dart | 0.00 | 0.00 | 0.00 | no unit testing|
---------------------------------------------|---------|---------|---------|-------------------|
All files with unit testing | 100.00 | 100.00 | 88.37 | |
---------------------------------------------|---------|---------|---------|-------------------|
flutter pub run test_cov_console -c --output=coverage/test_coverage.csv
Sample CSV output file:
File,% Branch,% Funcs,% Lines,Uncovered Line #s
lib/,,,,
test_cov_console.dart,0.00,0.00,0.00,no unit testing
lib/src/,,,,
parser.dart,100.00,100.00,97.22,"97"
parser_constants.dart,100.00,100.00,100.00,""
print_cov.dart,100.00,100.00,82.91,"29,49,51,52,171,174,177,180,183,184,185,186,187,188,279,324,325,387,388,389,390,391,392,393,394,395,398"
print_cov_constants.dart,0.00,0.00,0.00,no unit testing
All files with unit testing,100.00,100.00,86.07,""
You can install the package from the command line:
dart pub global activate test_cov_console
The package has the following executables:
$ test_cov_console
Run this command:
With Dart:
$ dart pub add test_cov_console
With Flutter:
$ flutter pub add test_cov_console
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):
dependencies:
  test_cov_console: ^0.2.2
Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.
Now in your Dart code, you can use:
import 'package:test_cov_console/test_cov_console.dart';
example/lib/main.dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
// This is the theme of your application.
//
// Try running your application with "flutter run". You'll see the
// application has a blue toolbar. Then, without quitting the app, try
// changing the primarySwatch below to Colors.green and then invoke
// "hot reload" (press "r" in the console where you ran "flutter run",
// or simply save your changes to "hot reload" in a Flutter IDE).
// Notice that the counter didn't reset back to zero; the application
// is not restarted.
primarySwatch: Colors.blue,
// This makes the visual density adapt to the platform that you run
// the app on. For desktop platforms, the controls will be smaller and
// closer together (more dense) than on mobile platforms.
visualDensity: VisualDensity.adaptivePlatformDensity,
),
home: MyHomePage(title: 'Flutter Demo Home Page'),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key? key, required this.title}) : super(key: key);
// This widget is the home page of your application. It is stateful, meaning
// that it has a State object (defined below) that contains fields that affect
// how it looks.
// This class is the configuration for the state. It holds the values (in this
// case the title) provided by the parent (in this case the App widget) and
// used by the build method of the State. Fields in a Widget subclass are
// always marked "final".
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
// This call to setState tells the Flutter framework that something has
// changed in this State, which causes it to rerun the build method below
// so that the display can reflect the updated values. If we changed
// _counter without calling setState(), then the build method would not be
// called again, and so nothing would appear to happen.
_counter++;
});
}
@override
Widget build(BuildContext context) {
// This method is rerun every time setState is called, for instance as done
// by the _incrementCounter method above.
//
// The Flutter framework has been optimized to make rerunning build methods
// fast, so that you can just rebuild anything that needs updating rather
// than having to individually change instances of widgets.
return Scaffold(
appBar: AppBar(
// Here we take the value from the MyHomePage object that was created by
// the App.build method, and use it to set our appbar title.
title: Text(widget.title),
),
body: Center(
// Center is a layout widget. It takes a single child and positions it
// in the middle of the parent.
child: Column(
// Column is also a layout widget. It takes a list of children and
// arranges them vertically. By default, it sizes itself to fit its
// children horizontally, and tries to be as tall as its parent.
//
// Invoke "debug painting" (press "p" in the console, choose the
// "Toggle Debug Paint" action from the Flutter Inspector in Android
// Studio, or the "Toggle Debug Paint" command in Visual Studio Code)
// to see the wireframe for each widget.
//
// Column has various properties to control how it sizes itself and
// how it positions its children. Here we use mainAxisAlignment to
// center the children vertically; the main axis here is the vertical
// axis because Columns are vertical (the cross axis would be
// horizontal).
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headline4,
),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: Icon(Icons.add),
), // This trailing comma makes auto-formatting nicer for build methods.
);
}
}
Author: DigitalKatalis
Source Code: https://github.com/DigitalKatalis/test_cov_console
License: BSD-3-Clause license
What is face recognition? Or what is recognition? When you look at an apple fruit, your mind immediately tells you that this is an apple fruit. This process, your mind telling you that this is an apple fruit is recognition in simple words. So what is face recognition then? I am sure you have guessed it right. When you look at your friend walking down the street or a picture of him, you recognize that he is your friend Paulo. Interestingly when you look at your friend or a picture of him you look at his face first before looking at anything else. Ever wondered why you do that? This is so that you can recognize him by looking at his face. Well, this is you doing face recognition.
But the real question is: how does face recognition work? It is quite simple and intuitive. Take a real life example: when you meet someone for the first time in your life, you don't recognize him, right? While he talks or shakes hands with you, you look at his face, eyes, nose, mouth, color and overall look. This is your mind learning or training for the face recognition of that person by gathering face data. Then he tells you that his name is Paulo. At this point your mind knows that the face data it just learned belongs to Paulo. Now your mind is trained and ready to do face recognition on Paulo's face. Next time when you see Paulo or his face in a picture you will immediately recognize him. This is how face recognition works. The more you meet Paulo, the more data your mind will collect about Paulo, especially his face, and the better you will become at recognizing him.
Now the next question is how to code face recognition with OpenCV; after all, this is the only reason you are reading this article, right? OK then. You might say that our mind can do these things easily, but actually coding them into a computer is difficult? Don't worry, it is not. Thanks to OpenCV, coding face recognition is easier than it seems. The coding steps for face recognition are the same as we discussed in the real life example above.
OpenCV comes equipped with a built-in face recognizer; all you have to do is feed it the face data. It's that simple, and this is how it will look once we are done coding it.
OpenCV has three built in face recognizers and thanks to OpenCV's clean coding, you can use any of them by just changing a single line of code. Below are the names of those face recognizers and their OpenCV calls.
cv2.face.createEigenFaceRecognizer()
cv2.face.createFisherFaceRecognizer()
cv2.face.createLBPHFaceRecognizer()
We have got three face recognizers but do you know which one to use and when? Or which one is better? I guess not. So why not go through a brief summary of each, what you say? I am assuming you said yes :) So let's dive into the theory of each.
This algorithm considers the fact that not all parts of a face are equally important or equally useful. When you look at someone, you recognize him/her by distinct features like the eyes, nose, cheeks, and forehead, and how they vary with respect to each other. So you are actually focusing on the areas of maximum change (mathematically speaking, this change is variance) of the face. For example, from the eyes to the nose there is a significant change, and the same is the case from the nose to the mouth. When you look at multiple faces, you compare them by looking at these parts of the faces because these parts are the most useful and important components of a face. Important because they catch the maximum change among faces, change that helps you differentiate one face from the other. This is exactly how the EigenFaces face recognizer works.
The EigenFaces face recognizer looks at all the training images of all the persons as a whole and tries to extract the components which are important and useful (the components that catch the maximum variance/change) and discards the rest. This way it not only extracts the important components from the training data but also saves memory by discarding the less important ones. These extracted components are called principal components. Below is an image showing the principal components extracted from a list of faces.
Principal Components source
You can see that principal components actually represent faces and these faces are called eigen faces and hence the name of the algorithm.
So this is how EigenFaces face recognizer trains itself (by extracting principal components). Remember, it also keeps a record of which principal component belongs to which person. One thing to note in above image is that Eigenfaces algorithm also considers illumination as an important component.
Later during recognition, when you feed a new image to the algorithm, it repeats the same process on that image as well. It extracts the principal component from that new image and compares that component with the list of components it stored during training and finds the component with the best match and returns the person label associated with that best match component.
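To make that training-and-matching pipeline concrete, here is a minimal sketch of the EigenFaces idea using scikit-learn's PCA and a nearest-neighbor match on the bundled Olivetti faces dataset (illustrative only, not how OpenCV implements it internally; the recognize helper is hypothetical):
#Toy EigenFaces sketch: PCA components plus nearest-neighbor matching
import numpy as np
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
faces = fetch_olivetti_faces()        #400 face images of 40 persons
X, y = faces.data, faces.target       #each row is a flattened 64x64 face
pca = PCA(n_components=50).fit(X)     #the 50 principal components ("eigenfaces")
X_proj = pca.transform(X)             #each face stored as 50 component weights
def recognize(face_vector):
    #project the new face and return the label of the closest stored face
    w = pca.transform(face_vector.reshape(1, -1))
    return y[np.argmin(np.linalg.norm(X_proj - w, axis=1))]
print(recognize(X[0]))                #prints the label of person 0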
Easy peasy, right? Next one is easier than this one.
This algorithm is an improved version of the EigenFaces face recognizer. EigenFaces looks at all the training faces of all the persons at once and finds the principal components from all of them combined. By capturing principal components from all of them combined, you are not focusing on the features that discriminate one person from another, but on the features that represent all the persons in the training data as a whole.
This approach has drawbacks, for example, images with sharp changes (like light changes which is not a useful feature at all) may dominate the rest of the images and you may end up with features that are from external source like light and are not useful for discrimination at all. In the end, your principal components will represent light changes and not the actual face features.
The Fisherfaces algorithm, instead of extracting useful features that represent all the faces of all the persons, extracts useful features that discriminate one person from the others. This way, the features of one person do not dominate over the others, and you have features that distinguish one person from another.
Below is an image of features extracted using Fisherfaces algorithm.
Fisher Faces source
You can see that features extracted actually represent faces and these faces are called fisher faces and hence the name of the algorithm.
One thing to note here is that even in Fisherfaces algorithm if multiple persons have images with sharp changes due to external sources like light they will dominate over other features and affect recognition accuracy.
Getting bored with this theory? Don't worry, only one face recognizer is left and then we will dive deep into the coding part.
I wrote a detailed explanation of Local Binary Patterns Histograms in my previous article on face detection using local binary patterns histograms. So here I will just give a brief overview of how it works.
We know that Eigenfaces and Fisherfaces are both affected by light and in real life we can't guarantee perfect light conditions. LBPH face recognizer is an improvement to overcome this drawback.
The idea is to not look at the image as a whole, but instead to find the local features of an image. The LBPH algorithm tries to find the local structure of an image, and it does that by comparing each pixel with its neighboring pixels.
Take a 3x3 window and move it across the whole image; at each move (each local part of the image), compare the pixel at the center with its neighboring pixels. The neighbors with an intensity value less than or equal to the center pixel are denoted by 1, and the others by 0. Then you read these 0/1 values under the 3x3 window in clockwise order, and you will have a binary pattern like 11100011 that is local to some area of the image. You do this over the whole image and you will have a list of local binary patterns.
LBP Labeling
Now you get why this algorithm has Local Binary Patterns in its name? Because you get a list of local binary patterns. Now you may be wondering, what about the histogram part of the LBPH? Well after you get a list of local binary patterns, you convert each binary pattern into a decimal number (as shown in above image) and then you make a histogram of all of those values. A sample histogram looks like this.
Sample Histogram
I guess this answers the question about histogram part. So in the end you will have one histogram for each face image in the training data set. That means if there were 100 images in training data set then LBPH will extract 100 histograms after training and store them for later recognition. Remember, algorithm also keeps track of which histogram belongs to which person.
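Before moving on, here is a quick sketch of the window-to-decimal step described above (a hypothetical lbp_code helper, using the thresholding convention from the text):
#LBP code of one 3x3 window (neighbors <= center become 1, per the text's convention)
import numpy as np
def lbp_code(window):
    center = window[1, 1]
    #the 8 neighbors read clockwise, starting from the top-left corner
    neighbors = [window[0, 0], window[0, 1], window[0, 2], window[1, 2],
                 window[2, 2], window[2, 1], window[2, 0], window[1, 0]]
    bits = [1 if n <= center else 0 for n in neighbors]
    return int("".join(map(str, bits)), 2)   #binary pattern -> decimal value
window = np.array([[90, 80, 70],
                   [60, 75, 40],
                   [30, 20, 10]])
print(lbp_code(window))   #this decimal value is added to the image's histogram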
Later, during recognition, when you feed a new image to the recognizer, it will generate a histogram for that image, compare it with the histograms it already has, find the best matching histogram, and return the person label associated with that best match.
Below is a list of faces and their respective local binary patterns images. You can see that the LBP images are not affected by changes in light conditions.
LBP Faces source
The theory part is over and now comes the coding part! Ready to dive into coding? Let's get into it then.
Coding Face Recognition with OpenCV
The face recognition process in this tutorial is divided into three steps: preparing the training data, training the face recognizer, and testing it by making predictions on new images.
To detect faces, I will use the code from my previous article on face detection. So if you have not read it, I encourage you to do so to understand how face detection works and its Python coding.
Before starting the actual coding we need to import the required modules for coding. So let's import them first.
#import OpenCV module
import cv2
#import os module for reading training data directories and paths
import os
#import numpy to convert python lists to numpy arrays as
#it is needed by OpenCV face recognizers
import numpy as np
#matplotlib for display our images
import matplotlib.pyplot as plt
%matplotlib inline
The more images used in training the better. Normally a lot of images are used for training a face recognizer so that it can learn different looks of the same person, for example with glasses, without glasses, laughing, sad, happy, crying, with beard, without beard etc. To keep our tutorial simple we are going to use only 12 images for each person.
So our training data consists of a total of 2 persons, with 12 images of each person. All training data is inside the training-data folder, which contains one folder for each person, named with the format sLabel (e.g. s1, s2), where Label is the integer label assigned to that person. For example, the folder named s1 contains the images for person 1. The directory structure tree for the training data is as follows:
training-data
|-------------- s1
| |-- 1.jpg
| |-- ...
| |-- 12.jpg
|-------------- s2
| |-- 1.jpg
| |-- ...
| |-- 12.jpg
The test-data folder contains images that we will use to test our face recognizer after it has been successfully trained.
As the OpenCV face recognizer accepts labels as integers, we need to define a mapping between integer labels and persons' actual names, so below I define a mapping of each person's integer label to their respective name.
Note: As we have not assigned label 0 to any person, the mapping for label 0 is empty.
#there is no label 0 in our training data so subject name for index/label 0 is empty
subjects = ["", "Tom Cruise", "Shahrukh Khan"]
You may be wondering, why data preparation, right? Well, the OpenCV face recognizer accepts data in a specific format. It accepts two vectors: one vector of the faces of all the persons, and a second vector of integer labels for each face, so that when processing a face the face recognizer knows which person that particular face belongs to.
For example, if we had 2 persons and 2 images for each person.
PERSON-1 PERSON-2
img1 img1
img2 img2
Then the prepare data step will produce following face and label vectors.
FACES LABELS
person1_img1_face 1
person1_img2_face 1
person2_img1_face 2
person2_img2_face 2
The prepare data step can be further divided into the following sub-steps:
1. Read all the subject folder names provided in the training data folder (e.g. s1, s2).
2. For each subject, extract its label number. The folder names follow the sLabel convention, where Label is an integer representing the label we have assigned to that subject. So, for example, the folder name s1 means that the subject has label 1, s2 means the subject's label is 2, and so on. The label extracted in this step is assigned to each face detected in the next step.
3. Read all the images of the subject and detect a face in each image.
4. Add each detected face to a vector of faces, and add its label to a corresponding vector of labels.
As mentioned above, to detect faces I am going to use the code from my previous article on face detection. If you have not read it, I encourage you to do so to understand how face detection and its coding work. Below is the same code.
#function to detect face using OpenCV
def detect_face(img):
    #convert the test image to gray image as opencv face detector expects gray images
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    #load OpenCV face detector; I am using LBP which is fast
    #there is also a more accurate but slow Haar classifier
    face_cascade = cv2.CascadeClassifier('opencv-files/lbpcascade_frontalface.xml')
    #let's detect multiscale images (some faces may be closer to the camera than others)
    #result is a list of faces
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)
    #if no faces are detected then return original img
    if len(faces) == 0:
        return None, None
    #under the assumption that there will be only one face,
    #extract the face area
    (x, y, w, h) = faces[0]
    #return only the face part of the image
    return gray[y:y+h, x:x+w], faces[0]
I am using OpenCV's LBP face detector. First, I convert the image to grayscale, because most operations in OpenCV are performed in grayscale. Then I load the LBP face detector using the cv2.CascadeClassifier class and use its detectMultiScale method to detect all the faces in the image. From the detected faces I pick only the first one, under the assumption that there is only one prominent face per image. As the faces returned by detectMultiScale are actually rectangles (x, y, width, height) and not actual face images, we extract the face area from the grayscale image and return both the face image area and the face rectangle.
Now you have got a face detector and you know the 4 steps to prepare the data, so are you ready to code the prepare data step? Yes? So let's do it.
#this function will read all persons' training images, detect face from each image
#and will return two lists of exactly the same size, one list
#of faces and another list of labels for each face
def prepare_training_data(data_folder_path):
    #------STEP-1--------
    #get the directories (one directory for each subject) in the data folder
    dirs = os.listdir(data_folder_path)
    #list to hold all subject faces
    faces = []
    #list to hold labels for all subjects
    labels = []
    #let's go through each directory and read the images within it
    for dir_name in dirs:
        #our subject directories start with the letter 's' so
        #ignore any non-relevant directories if any
        if not dir_name.startswith("s"):
            continue
        #------STEP-2--------
        #extract label number of subject from dir_name
        #format of dir name = sLabel,
        #so removing the letter 's' from dir_name will give us the label
        label = int(dir_name.replace("s", ""))
        #build path of directory containing images for current subject
        #sample subject_dir_path = "training-data/s1"
        subject_dir_path = data_folder_path + "/" + dir_name
        #get the image names that are inside the given subject directory
        subject_images_names = os.listdir(subject_dir_path)
        #------STEP-3--------
        #go through each image name, read image,
        #detect face and add face to list of faces
        for image_name in subject_images_names:
            #ignore system files like .DS_Store
            if image_name.startswith("."):
                continue
            #build image path
            #sample image path = training-data/s1/1.pgm
            image_path = subject_dir_path + "/" + image_name
            #read image
            image = cv2.imread(image_path)
            #display an image window to show the image
            cv2.imshow("Training on image...", image)
            cv2.waitKey(100)
            #detect face
            face, rect = detect_face(image)
            #------STEP-4--------
            #for the purpose of this tutorial
            #we will ignore faces that are not detected
            if face is not None:
                #add face to list of faces
                faces.append(face)
                #add label for this face
                labels.append(label)
    cv2.destroyAllWindows()
    cv2.waitKey(1)
    cv2.destroyAllWindows()
    return faces, labels
I have defined a function that takes as a parameter the path where the training subjects' folders are stored. This function follows the same 4 prepare-data sub-steps mentioned above.
(step-1) I use the os.listdir method to read the names of all folders stored in the path passed to the function, and I define the faces and labels lists.
(step-2) After that, I traverse the subjects' folder names and extract the label information from each one. As folder names follow the sLabel naming convention, removing the letter s from a folder name gives us the label assigned to that subject.
(step-3) I then read all the image names of the current subject and traverse those images one by one. I use OpenCV's imshow(window_title, image) method along with the waitKey(interval) method to display the current image being traversed; waitKey pauses the code flow for the given interval in milliseconds, and I use a 100ms interval so that we can view each image window for 100ms. Then I detect the face in the current image.
(step-4) Finally, I add the detected face and its label to their respective lists.
But a function can't do anything unless we call it on some data that it has to prepare, right? Don't worry, I have got data of two beautiful and famous celebrities. I am sure you will recognize them!
Let's call this function on images of these beautiful celebrities to prepare data for training of our Face Recognizer. Below is a simple code to do that.
#let's first prepare our training data
#data will be in two lists of same size
#one list will contain all the faces
#and other list will contain respective labels for each face
print("Preparing data...")
faces, labels = prepare_training_data("training-data")
print("Data prepared")
#print total faces and labels
print("Total faces: ", len(faces))
print("Total labels: ", len(labels))
Preparing data...
Data prepared
Total faces: 23
Total labels: 23
This was probably the boring part, right? Don't worry, the fun stuff is coming up next. It's time to train our own face recognizer so that, once trained, it can recognize new faces of the persons it was trained on. Ready? OK, then let's train our face recognizer.
As we know, OpenCV comes equipped with three face recognizers.
cv2.face.createEigenFaceRecognizer()
cv2.face.createFisherFaceRecognizer()
cv2.face.createLBPHFaceRecognizer()
I am going to use LBPH face recognizer but you can use any face recognizer of your choice. No matter which of the OpenCV's face recognizer you use the code will remain the same. You just have to change one line, the face recognizer initialization line given below.
#create our LBPH face recognizer
face_recognizer = cv2.face.createLBPHFaceRecognizer()
#or use EigenFaceRecognizer by replacing above line with
#face_recognizer = cv2.face.createEigenFaceRecognizer()
#or use FisherFaceRecognizer by replacing above line with
#face_recognizer = cv2.face.createFisherFaceRecognizer()
Now that we have initialized our face recognizer and prepared our training data, it's time to train the face recognizer. We will do that by calling the train(faces-vector, labels-vector) method of the face recognizer.
#train our face recognizer of our training faces
face_recognizer.train(faces, np.array(labels))
Did you notice that instead of passing the labels vector directly to the face recognizer, I first converted it to a numpy array? This is because OpenCV expects the labels vector to be a numpy array.
Still not satisfied? Want to see some action? Next step is the real action, I promise!
Now comes my favorite part, the prediction part. This is where we get to see if our algorithm is actually recognizing our trained subjects' faces or not. We will take two test images of our celebrities, detect the faces in each of them, and then pass those faces to our trained face recognizer to see if it recognizes them.
Below are some utility functions that we will use for drawing a bounding box (rectangle) around a face and putting the celebrity's name near the face bounding box.
#function to draw rectangle on image
#according to given (x, y) coordinates and
#given width and height
def draw_rectangle(img, rect):
    (x, y, w, h) = rect
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)
#function to draw text on given image starting from
#passed (x, y) coordinates.
def draw_text(img, text, x, y):
    cv2.putText(img, text, (x, y), cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 255, 0), 2)
First function draw_rectangle
draws a rectangle on image based on passed rectangle coordinates. It uses OpenCV's built in function cv2.rectangle(img, topLeftPoint, bottomRightPoint, rgbColor, lineWidth)
to draw rectangle. We will use it to draw a rectangle around the face detected in test image.
Second function draw_text
uses OpenCV's built in function cv2.putText(img, text, startPoint, font, fontSize, rgbColor, lineWidth)
to draw text on image.
Now that we have the drawing functions, we just need to call the face recognizer's predict(face)
method to test our face recognizer on test images. Following function does the prediction for us.
#this function recognizes the person in the image passed
#and draws a rectangle around the detected face with the name of the
#subject
def predict(test_img):
    #make a copy of the image as we don't want to change the original image
    img = test_img.copy()
    #detect face from the image
    face, rect = detect_face(img)
    #predict the image using our face recognizer
    label = face_recognizer.predict(face)
    #get name of respective label returned by face recognizer
    label_text = subjects[label]
    #draw a rectangle around face detected
    draw_rectangle(img, rect)
    #draw name of predicted person
    draw_text(img, label_text, rect[0], rect[1]-5)
    return img
The predict(face) method returns the label of the recognized person (note that in newer OpenCV versions, predict returns a (label, confidence) tuple). Now that we have the prediction function well defined, the next step is to actually call this function on our test images and display those test images to see if our face recognizer correctly recognized them. So let's do it. This is what we have been waiting for.
print("Predicting images...")
#load test images
test_img1 = cv2.imread("test-data/test1.jpg")
test_img2 = cv2.imread("test-data/test2.jpg")
#perform a prediction
predicted_img1 = predict(test_img1)
predicted_img2 = predict(test_img2)
print("Prediction complete")
#create a figure of 2 plots (one for each test image)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
#display test image1 result
ax1.imshow(cv2.cvtColor(predicted_img1, cv2.COLOR_BGR2RGB))
#display test image2 result
ax2.imshow(cv2.cvtColor(predicted_img2, cv2.COLOR_BGR2RGB))
#display both images
cv2.imshow("Tom cruise test", predicted_img1)
cv2.imshow("Shahrukh Khan test", predicted_img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.waitKey(1)
cv2.destroyAllWindows()
Predicting images...
Prediction complete
Woohoo! Isn't it beautiful? Indeed, it is!
Face Recognition is a fascinating idea to work on and OpenCV has made it extremely simple and easy for us to code it. It just takes a few lines of code to have a fully working face recognition application and we can switch between all three face recognizers with a single line of code change. It's that simple.
Although the EigenFaces, FisherFaces, and LBPH face recognizers are good, there are even better ways to perform face recognition, such as using Histograms of Oriented Gradients (HOGs) and neural networks. More advanced face recognition algorithms are nowadays implemented using a combination of OpenCV and machine learning. I have plans to write some articles on those more advanced methods as well, so stay tuned!
Download Details:
Author: informramiz
Source Code: https://github.com/informramiz/opencv-face-recognition-python
License: MIT License
Amid the COVID-19 crisis, on-demand grocery delivery apps are a massive hit. The convenience of availing doorstep essentials, coupled with accessing a wide range of products, is propelling the need for grocery delivery apps more than ever. Even retail grocery outlets are eyeing to explore the online market space due to its massive popularity and revenue.
Are you an entrepreneur aiming to initiate your Bigbasket clone app development? If so, are you ready to tackle the COVID-19 pandemic situation? This blog discusses a few vital strategies that can make an app like Bigbasket combat the outbreak situation comprehensively.
Take care of your supply chain: Coping with the demand requires an all-encompassing, ever-present supply chain. Your supply chain includes grocery suppliers and delivery chains. Hence, maintain adequate stock and build healthy relationships with suppliers to avoid the risk of running out of groceries.
Equip your delivery chain with safety gear: Your delivery chain comes in direct contact with your customers. Hence, provide safety gear like gloves, masks, sanitizers, etc., to your delivery drivers. Besides, automating the process of verifying your delivery drivers' safety standards with face mask recognition software can come in handy.
Encourage contactless delivery options: One of the significant reasons for virus transmission is through external physical contact. Eliminate any form of contact in your on-demand grocery delivery app ecosystem by encouraging your delivery workers to adapt to ‘zero-contact’ deliveries. This way, instead of handing over grocery orders directly, your delivery drivers will drop them at discrete spots suggested by customers.
Safety badges for grocery stores: Building the trust factor is crucial to sustain in the market. Hence, scrutinize grocery stores on their safety standards and provide safety badges to them. By displaying safety badges near their names, the customer trust-factor enhances substantially.
Summing up,
Surviving the outbreak situation will open the floodgates to unrestricted revenue for an entrepreneur. Implement these strategies in your Bigbasket clone script and scale your grocery delivery app among the masses.
#bigbasket clone #bigbasket clone app #bigbasket clone app development #bigbasket clone app development company #bigbasket clone script #bigbasket app clone
The reseller e-commerce business has become the talk of the town. The reason for the huge eminence of reseller e-commerce in today's world is the benefits it assures. With a robust and user-friendly app, your reseller e-commerce business is sure to take you places.
Before stepping out to invest in the reseller e-commerce business, consider launching an app like Letgo that is armed with several rich features. In this blog, you will get to know the list of features of the app that will make your reseller business a sure-shot success.
Inventory management- Sellers can restock and manage the inventory without any hassles.
Catalog management- Sellers can sort and group their products under categories. Also, sellers can add or remove products from the catalog in just a few taps.
Diverse payment options- The app offers users multiple payment options, letting them pay via whichever option is most convenient.
Push notifications- The push notifications are integral in keeping users up-to-date with your app. Alerts regarding new product arrival, offers, updates, etc. can be sent via push notifications.
These are the features that will give an extraordinary user experience. Let us next learn about the revenue model of the app.
Commission fees- Sellers who get orders via your app will pay commission fees. Associating with multiple sellers will amplify the revenue through commission fees.
Promotional content- You can promote sellers’ products on your app under the digital banner section.
Premium services- You can allow access to certain advanced features on a paid basis to users. Formulate monthly or yearly subscription plans so that users can choose their convenient plan.
Conclusion
The Letgo clone script is the best reseller e-commerce app to invest in. Head out to find an app developer who has experience in building clone apps with robust functional models.
##letgo clone ##letgo clone app ##letgo clone script ##letgo clone app development ##letgo clone software ##letgo clone open source