V-Calendar is a clean and lightweight plugin for displaying simple, attributed calendars in Vue.js. It uses attributes to decorate the calendar with various visual indicators including highlighted date regions, dots, bars, content styles and popovers for simple tooltips and even custom slot content.
Animated calendar/datepicker that displays regions, indicators and day popovers for simple & recurring dates.
Any single attribute may contain one of each object and can be displayed for single dates, date ranges and even complex date patterns like every other Friday, the 15th of every month or the last Friday of every other month.
It has date picker support out of the box with single date, multiple date and date range selection modes. Because v-date-picker is simply a wrapper for v-calendar, both can be extensively customized using props, slots and theme styling, just like v-calendar. And of course, V-Calendar is responsive and mobile friendly. For example, it supports touch swipes for month navigation.
v-calendar is the core component. By default, it has a neutral design that should blend nicely within any web application, with various options for configuring the basic layout.
Along with the calendar panes, v-calendar employs a semantic-inspired navigation pane when the header title is hovered or focused by the user.
Attributes are the most important concept to understand. They provide a powerful way to communicate visual information to your users quickly and effectively. Fortunately, they are also easy to specify.
The first thing to understand about attributes is what they are capable of displaying.
For now, let’s just start by displaying a simple highlight on today’s date.
<template>
<v-calendar :attributes='attrs'>
</v-calendar>
</template>
export default {
data() {
return {
attrs: [
{
key: 'today',
highlight: {
backgroundColor: '#ff8080',
// Other properties are available too, like `height` & `borderRadius`
},
dates: new Date() // today
}
],
};
},
};
To add some contrast to the highlighted date, we can use a content style, which is simply a style object that gets applied to the day content text.
export default {
data() {
return {
attrs: [
{
key: 'today',
highlight: {
backgroundColor: '#ff8080',
},
// Just use a normal style
contentStyle: {
color: '#fafafa',
},
dates: new Date() // today
},
],
};
},
};
Finally, let’s see how simple it is to add a popover label (or tooltip) to the calendar when this highlight is hovered over. To do that, we just need to add a popover section to our attribute.
export default {
data() {
return {
attrs: [
{
key: 'today',
dates: new Date(), // today
highlight: {
backgroundColor: '#ff8080',
},
// Just use a normal style
contentStyle: {
color: '#fafafa',
},
// Our new popover here
popover: {
label: 'You just hovered over today\'s date!',
}
},
],
};
},
};
The second aspect of attributes is specifying where to display them. In the previous example, we saw that all we had to do was assign a simple date object to the dates property. Note that we aren't limited to using single date or date range objects. We can also use an array of dates.
...
dates: [ new Date(2018, 0, 1), new Date(2018, 0, 15) ]
...
Or date ranges…
...
dates: [
{ start: new Date(2018, 0, 1), end: new Date(2018, 0, 5) },
{ start: new Date(2018, 0, 15), span: 5 } // Span is number of days
]
...
Or date patterns.
...
dates: { weekdays: [1, 7] } // On the weekends
...
The v-date-picker component is a flexible date picker wrapper around v-calendar, which means it supports all the props and events that v-calendar does. Using the mode prop, it is capable of 3 selection modes: single date, multiple dates and date range.
Date pickers can be displayed inline or as a popover for an input element which can be classed or styled.
<v-date-picker
mode='range'
v-model='selectedDate'
show-caps>
</v-date-picker>
export default {
data() {
return {
selectedDate: {
start: new Date(2018, 0, 9),
end: new Date(2018, 0, 18)
}
};
},
};
Also, a custom slot element can be used to display your own input element. This example uses Buefy for a custom styled input component.
<v-date-picker
mode='single'
v-model='selectedDate'>
<b-field :type='inputState.type' slot-scope='props'>
<b-input
type='text'
icon='calendar'
:value='props.inputValue'
:placeholder='inputState.message'
@change.native='props.updateValue($event.target.value)'
expanded>
</b-input>
<p
class='control'
v-if='selectedDate'>
<a
:class='["button", inputState.type]'
@click='selectedDate = null'>
Clear
</a>
</p>
</b-field>
</v-date-picker>
export default {
data() {
return {
selectedDate: new Date(2018, 0, 10)
};
},
computed: {
inputState() {
if (!this.selectedDate) {
return {
type: 'is-danger',
message: 'Date required.',
};
}
return {
type: 'is-primary',
message: '',
};
},
},
};
You can disable dates, date ranges and date patterns using the following props:
Explicitly, via disabled-dates.
<!--Disable weekend selection-->
<v-date-picker
mode='single'
:disabled-dates='{ weekdays: [1, 7] }'
v-model='selectedDate'>
</v-date-picker>
Implicitly, via available-dates. Any dates not included in available-dates are disabled.
<!--Today is the minimum date (null denotes infinite date)-->
<v-date-picker
mode='single'
:available-dates='{ start: new Date(), end: null }'
v-model='selectedDate'>
</v-date-picker>
Vue.js version 2.5+ is required.
npm install v-calendar
import Vue from 'vue';
import VCalendar from 'v-calendar';
import 'v-calendar/lib/v-calendar.min.css';
// Use v-calendar, v-date-picker & v-popover components
Vue.use(VCalendar);
<template>
<v-calendar
is-double-paned>
</v-calendar>
<v-date-picker
mode='single'
v-model='selectedValue'>
</v-date-picker>
</template>
<script>
export default {
data() {
return {
selectedValue: new Date(),
};
},
};
</script>
<html>
<head>
<meta charset='utf-8'>
<meta name='viewport' content='width=device-width, initial-scale=1, shrink-to-fit=no'>
<meta http-equiv='x-ua-compatible' content='ie=edge'>
<!--1. Link VCalendar CSS-->
<link rel='stylesheet' href='https://unpkg.com/v-calendar/lib/v-calendar.min.css'>
</head>
<body>
<div id='app'>
<v-calendar></v-calendar>
<v-date-picker :mode='mode' v-model='selectedDate'></v-date-picker>
</div>
<!--2. Link Vue Javascript-->
<script src='https://unpkg.com/vue/dist/vue.js'></script>
<!--3. Link VCalendar Javascript (Plugin automatically installed)-->
<script src='https://unpkg.com/v-calendar'></script>
<!--4. Create the Vue instance-->
<script>
new Vue({
el: '#app',
data: {
// Data used by the date picker
mode: 'single',
selectedDate: null,
}
})
</script>
</body>
</html>
Author: nathanreyes
Live Demo: https://vcalendar.io/
GitHub: https://github.com/nathanreyes/v-calendar
#vuejs #javascript #vue-js
Specializing in commercial cleaning, office cleaning, hospital & GP surgery cleaning, residential cleaning, washroom cleaning, school cleaning, and Covid cleaning and sanitization, ECS Commercial Cleaning Company London has built up a large, experienced team of highly skilled professionals who ensure work is delivered to the highest standards, on time and on budget.
At ECS Commercial Cleaning, we provide a safe, cost-effective and efficient service that covers all your commercial cleaning needs. From residential cleaning, washroom cleaning and school cleaning to office cleaning and hospital & GP surgery cleaning, we cover it all. We have years of experience with all kinds of projects and know the best approach to save you time and money. Our professional knowledge and skills have enabled us to provide high-quality cleaning solutions throughout London.
We’ve been delivering commercial cleaning services throughout London with the help of trained and experienced professionals, using only the finest equipment and cleaning solutions. Our team manages every cleaning project from initial consultation through to completion, on budget and on schedule.
ECS Commercial Cleaning strives to keep people first, investing in their professional training and safety. We work hard to create and sustain an environment that helps us meet clients’ expectations through consistently excellent service and minimal downtime.
Our Services
With 10 years of market experience, a resource of professional employees and coverage throughout London, ECS Commercial Cleaning has established itself as one of the leading commercial cleaning companies, offering a broad range of valuable cleaning solutions.
Our clients include London’s leading retail outlets, office buildings and office premises, schools, hospitals, production and industrial premises, and others. Our cleaning solutions offer peace of mind to clients and, most importantly, ensure that workers are able to do their jobs comfortably and efficiently without compromising safety. With years of industry experience, we remain at the forefront of our industry thanks to our unparalleled customer dedication and unrivalled experience in providing safe and cost-effective cleaning solutions.
Our Expert Team
ECS Commercial Cleaning provides you with an expert team that completes your cleaning project professionally and efficiently. No matter what cleaning service you require, our aim is to work closely with our clients in order to comprehend their needs and fulfil their specific requirements through tailored cleaning solutions.
With our eco-friendly cleaning products and a team of experienced professionals, we can provide timely cleaning solutions to our clients. Contact ECS Commercial Cleaning on 0161 5462235.
#cleaning #commercial cleaning #office cleaning #residential cleaning #washroom cleaning #covid cleaning and sanitization
WordPress needs no introduction. It has been around for quite a long time, and up till now it has given a tough fight to every leading web development technology. The main reason behind its remarkable success is that it is highly customizable and SEO-friendly. Other benefits include open-source technology, security, user-friendliness, and the thousands of free plugins it offers.
Speaking of WordPress plugins: they are pieces of software that enable you to add more features to a website. They are easy to integrate into your website and don’t hamper the performance of the site. WordPress, as a leading technology, offers many plugins out of the box.
However, WordPress alone will not always be able to meet all your needs. Hence, you may have to customize a WordPress plugin to provide the functionality you want. WordPress plugins are easy to install and customize. You don’t have to build the solution from scratch, and that’s one of the reasons why small and medium-sized businesses love it. It doesn’t need a hefty investment or the hiring of an in-house development team. You can use the core functionality of the plugin and expand it as you like.
In this blog, we would be talking in-depth about plugins and how to customize WordPress plugins to improve the functionality of your web applications.
How Do WordPress Plugins Work?
Developing your own plugin requires you to have some knowledge of the way plugins work. It ensures the customized plugin functions properly and helps you avoid mistakes that can hamper the experience on your site.
1. Hooks
Plugins operate primarily using hooks. Just as a hook attaches one thing to another, a hook attaches a feature or piece of functionality to your website. The piece of code interacts with the other components present on the website. There are two types of hooks: actions and filters.
A. Action
If you want something to happen at a particular time, you need to use a WordPress “action” hook. With actions, you can add, change and improve the functionality of your plugin. It allows you to attach a new action that can be triggered by your users on the website.
There are several predefined actions available in WordPress, and custom WordPress plugin development also allows you to define your own actions. This way you can make your plugin function as you want. It also allows you to pass values to the hooked function. The add_action function will then connect that function to a specific action.
B. Filters
Filters are hooks that accept a single variable (or a series of variables) and send it back after modifying it. They allow you to change the content displayed to the user.
You can apply a filter on your website with the apply_filters function, and then define the filter callback as its own function. To add a filter hook on the website, you have to pass the $tag (the filter name) and $value (the filtered value or variable); this allows the hook to work. You can also pass extra values through additional $var arguments.
Once you have made your filter, you can execute it with the add_filter function. This will activate your filter and would work when a specific function is triggered. You can also manipulate the variable and return it.
2. Shortcodes
Shortcodes are a good way to create and display custom functionality to your site’s visitors. They are small bits of code that can be placed in posts and pages, as well as in menus, widgets, and so on.
Many plugins use shortcodes. By creating your very own shortcode, you too can customize a WordPress plugin. You can create your own shortcode with the add_shortcode function. The name of the shortcode is the first argument, and the second is the function that produces its output when the shortcode is triggered; that function receives the attributes, the enclosed content, and the shortcode name.
3. Widgets
Other than hooks and shortcodes, you can use widgets to add functionality to the site. You can create a WordPress widget by extending the WP_Widget class. Widgets lend themselves to a user-friendly experience, as they follow an object-oriented design approach and their functions and values are stored in a single entity.
How To Customize WordPress Plugins?
There are various methods to customize the WordPress plugins. Depending on your need, and the degree of customization you wish to make in the plugin, choose the right option for you. Also, don’t forget to keep in mind that it requires a little bit of technical knowledge too. So find an expert WordPress plugin development company in case you lack the knowledge to do it by yourself.
1. Hire A Plugin Developer
One of the best ways to customize a WordPress plugin is by hiring a plugin developer. There are many plugin developers listed in the WordPress directory. You can contact them and collaborate with world-class WordPress developers. It is quite easy to find a WordPress plugin developer.
Since plugin customization is often a small job that doesn’t pay well or lead to long-term work, some developers may be unwilling to collaborate, but you will eventually find the right people.
2. Creating A Supporting Plugin
If you are looking for added functionality in an already existing plugin go for this option. It is a cheap way to meet your needs and creating a supporting plugin takes very little time as it has very limited needs. Furthermore, you can extend a plugin to a current feature set without altering its base code.
However, to do so, you have to hire a WordPress developer as it also requires some technical knowledge.
3. Use Custom Hooks
Use the WordPress hooks to integrate some other feature into an existing plugin. You can add an action or a filter as per your need and improve the functionality of the website.
If the plugin you want to customize has the hook, you don’t have to do much to customize it. You can write your own plugin that works with these hooks. This way you don’t have to build a WordPress plugin right from scratch. If the hook is not present in the plugin code, you can contact a WordPress developer or write the code yourself. It may take some time, but it works.
If you had to add the hook to the plugin’s code yourself, keep in mind that you will have to re-apply that patch each time a new plugin update is released.
4. Override Callbacks
The last way to customize WordPress plugins is by overriding callbacks. You can alter the core functionality of a WordPress plugin with this method and completely change the way it works with your website. It is a way to completely transform the plugin: by adding your own custom callbacks, you can create the exact functionality you desire.
We suggest you go for a web developer proficient in WordPress, as this requires a good amount of technical knowledge and an understanding of how the plugin works.
#customize wordpress plugins #how to customize plugins in wordpress #how to customize wordpress plugins #how to edit plugins in wordpress #how to edit wordpress plugins #wordpress plugin customization
What is face recognition? Or what is recognition? When you look at an apple fruit, your mind immediately tells you that this is an apple fruit. This process, your mind telling you that this is an apple fruit is recognition in simple words. So what is face recognition then? I am sure you have guessed it right. When you look at your friend walking down the street or a picture of him, you recognize that he is your friend Paulo. Interestingly when you look at your friend or a picture of him you look at his face first before looking at anything else. Ever wondered why you do that? This is so that you can recognize him by looking at his face. Well, this is you doing face recognition.
But the real question is: how does face recognition work? It is quite simple and intuitive. Take a real-life example: when you meet someone for the first time in your life, you don't recognize him, right? While he talks or shakes hands with you, you look at his face, eyes, nose, mouth, color and overall look. This is your mind learning or training for the face recognition of that person by gathering face data. Then he tells you that his name is Paulo. At this point your mind knows that the face data it just learned belongs to Paulo. Now your mind is trained and ready to do face recognition on Paulo's face. Next time when you see Paulo or his face in a picture you will immediately recognize him. This is how face recognition works. The more you meet Paulo, the more data your mind will collect about Paulo, especially his face, and the better you will become at recognizing him.
Now the next question is how to code face recognition with OpenCV; after all, this is the only reason why you are reading this article, right? OK then. You might say that our mind can do these things easily, but actually coding them into a computer must be difficult? Don't worry, it is not. Thanks to OpenCV, coding face recognition is easier than it sounds. The coding steps for face recognition are the same as the ones we discussed in the real-life example above.
OpenCV comes equipped with a built-in face recognizer; all you have to do is feed it the face data. It's that simple, and this is how it will look once we are done coding it.
OpenCV has three built-in face recognizers and, thanks to OpenCV's clean coding, you can use any of them by just changing a single line of code. Below are the names of those face recognizers and their OpenCV calls.
cv2.face.createEigenFaceRecognizer()
cv2.face.createFisherFaceRecognizer()
cv2.face.createLBPHFaceRecognizer()
We have got three face recognizers, but do you know which one to use and when? Or which one is better? I guess not. So why not go through a brief summary of each, what do you say? I am assuming you said yes :) So let's dive into the theory of each.
This algorithm considers the fact that not all parts of a face are equally important or equally useful. When you look at someone, you recognize him or her by distinct features like the eyes, nose, cheeks and forehead, and how they vary with respect to each other. So you are actually focusing on the areas of maximum change (mathematically speaking, this change is variance) of the face. For example, from the eyes to the nose there is a significant change, and the same is the case from the nose to the mouth. When you look at multiple faces, you compare them by looking at these parts of the faces because these parts are the most useful and important components of a face. Important because they catch the maximum change among faces, the change that helps you differentiate one face from the other. This is exactly how the EigenFaces face recognizer works.
The EigenFaces face recognizer looks at all the training images of all the persons as a whole and tries to extract the components which are important and useful (the components that catch the maximum variance/change) and discards the rest. This way it not only extracts the important components from the training data but also saves memory by discarding the less important components. These important components are called principal components. Below is an image showing the principal components extracted from a list of faces.
Principal Components source
You can see that principal components actually represent faces and these faces are called eigen faces and hence the name of the algorithm.
So this is how EigenFaces face recognizer trains itself (by extracting principal components). Remember, it also keeps a record of which principal component belongs to which person. One thing to note in above image is that Eigenfaces algorithm also considers illumination as an important component.
Later during recognition, when you feed a new image to the algorithm, it repeats the same process on that image as well. It extracts the principal component from that new image and compares that component with the list of components it stored during training and finds the component with the best match and returns the person label associated with that best match component.
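To make the idea of principal components a little more concrete, here is a minimal numpy sketch of the same idea (my own illustration, not OpenCV's implementation, with random arrays standing in for real flattened face crops). It computes the eigenfaces of a small training set with an SVD and represents a new face by its coordinates along them.
import numpy as np

#stand-in data: 20 training faces, each a flattened 64x64 grayscale crop
faces = np.random.rand(20, 64 * 64)

#center the data and take the directions of maximum variance (the "eigenfaces")
mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:10]  #keep only the 10 most important components

#a face is then represented by its coordinates along those components,
#and recognition reduces to comparing these small coordinate vectors
train_coords = centered @ eigenfaces.T
new_face = np.random.rand(64 * 64)
new_coords = eigenfaces @ (new_face - mean_face)
closest = np.argmin(np.linalg.norm(train_coords - new_coords, axis=1))
print("closest training face index:", closest)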
Easy peasy, right? Next one is easier than this one.
This algorithm is an improved version of the EigenFaces face recognizer. EigenFaces looks at all the training faces of all the persons at once and finds principal components from all of them combined. By capturing principal components from all of them combined, you are not focusing on the features that discriminate one person from another but on the features that represent all the persons in the training data as a whole.
This approach has drawbacks, for example, images with sharp changes (like light changes which is not a useful feature at all) may dominate the rest of the images and you may end up with features that are from external source like light and are not useful for discrimination at all. In the end, your principal components will represent light changes and not the actual face features.
The Fisherfaces algorithm, instead of extracting useful features that represent all the faces of all the persons, extracts useful features that discriminate one person from the others. This way the features of one person do not dominate over the others, and you have the features that discriminate one person from the others.
Below is an image of features extracted using Fisherfaces algorithm.
Fisher Faces source
You can see that features extracted actually represent faces and these faces are called fisher faces and hence the name of the algorithm.
One thing to note here is that even in Fisherfaces algorithm if multiple persons have images with sharp changes due to external sources like light they will dominate over other features and affect recognition accuracy.
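To make "discriminative" a little more concrete, here is a small two-class numpy sketch of Fisher's criterion (again my own illustration, not OpenCV's implementation, with random arrays standing in for two people's face features). It finds the single direction that best separates the two people rather than the direction of maximum overall variance.
import numpy as np

#stand-in data: 12 feature vectors per person, 64 dimensions each
rng = np.random.default_rng(0)
person_a = rng.normal(0.0, 1.0, (12, 64))
person_b = rng.normal(0.8, 1.0, (12, 64))

#Fisher's criterion: maximize between-class scatter relative to within-class scatter;
#for two classes the optimal direction is Sw^-1 (mean_b - mean_a)
mean_a, mean_b = person_a.mean(axis=0), person_b.mean(axis=0)
within_scatter = np.cov(person_a, rowvar=False) + np.cov(person_b, rowvar=False)
w = np.linalg.solve(within_scatter + 1e-6 * np.eye(64), mean_b - mean_a)

#projecting onto w pushes the two people's scores apart,
#which is exactly the kind of feature Fisherfaces looks for
print("person A scores:", np.round(person_a @ w, 2))
print("person B scores:", np.round(person_b @ w, 2))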
Getting bored with this theory? Don't worry, only one face recognizer is left and then we will dive deep into the coding part.
I wrote a detailed explanation of Local Binary Patterns Histograms in my previous article on face detection using local binary patterns histograms. So here I will just give a brief overview of how it works.
We know that Eigenfaces and Fisherfaces are both affected by light and in real life we can't guarantee perfect light conditions. LBPH face recognizer is an improvement to overcome this drawback.
The idea is to not look at the image as a whole but instead find the local features of an image. The LBPH algorithm tries to find the local structure of an image, and it does that by comparing each pixel with its neighboring pixels.
Take a 3x3 window and move it across the whole image; at each move (each local part of the image), compare the pixel at the center with its neighboring pixels. Neighbors with an intensity value less than or equal to the center pixel are denoted by 1 and the others by 0. Then you read these 0/1 values under the 3x3 window in clockwise order and you get a binary pattern like 11100011, and this pattern is local to some area of the image. You do this over the whole image and you will have a list of local binary patterns.
LBP Labeling
Now you get why this algorithm has Local Binary Patterns in its name? Because you get a list of local binary patterns. Now you may be wondering, what about the histogram part of the LBPH? Well after you get a list of local binary patterns, you convert each binary pattern into a decimal number (as shown in above image) and then you make a histogram of all of those values. A sample histogram looks like this.
Sample Histogram
I guess this answers the question about the histogram part. So in the end you will have one histogram for each face image in the training data set. That means if there were 100 images in the training data set, then LBPH will extract 100 histograms after training and store them for later recognition. Remember, the algorithm also keeps track of which histogram belongs to which person.
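If you want to see those two steps in plain code, here is a minimal pure-numpy sketch (my own illustration, not the OpenCV implementation) that computes an 8-bit LBP code for every pixel and then builds a single histogram over the whole image. Note that the real LBPH recognizer divides the image into a grid and concatenates one histogram per cell, and that descriptions differ on which comparison direction contributes a 1; the sketch follows the convention used in the text above.
import numpy as np

def lbp_histogram(gray):
    h, w = gray.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    #the 8 neighbors, read clockwise starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            center = gray[i, j]
            code = 0
            for bit, (di, dj) in enumerate(offsets):
                #as described above: a neighbor <= center contributes a 1 bit
                if gray[i + di, j + dj] <= center:
                    code |= 1 << (7 - bit)
            codes[i - 1, j - 1] = code
    #histogram of the 256 possible local binary patterns
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist

#tiny usage example with random pixels standing in for a face crop
demo = np.random.randint(0, 256, (32, 32), dtype=np.uint8)
print(lbp_histogram(demo)[:10])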
Later, during recognition, when you feed a new image to the recognizer, it will generate a histogram for that new image, compare that histogram with the histograms it already has, find the best matching histogram and return the person label associated with that best match.
Below is a list of faces and their respective local binary patterns images. You can see that the LBP images are not affected by changes in light conditions.
LBP Faces source
The theory part is over and now comes the coding part! Ready to dive into coding? Let's get into it then.
Coding Face Recognition with OpenCV
The face recognition process in this tutorial is divided into three steps: prepare the training data, train the face recognizer, and run predictions on test images.
To detect faces, I will use the code from my previous article on face detection. So if you have not read it, I encourage you to do so to understand how face detection works and its Python coding.
Before starting the actual coding we need to import the required modules for coding. So let's import them first.
#import OpenCV module
import cv2
#import os module for reading training data directories and paths
import os
#import numpy to convert python lists to numpy arrays as
#it is needed by OpenCV face recognizers
import numpy as np
#matplotlib for display our images
import matplotlib.pyplot as plt
%matplotlib inline
The more images used in training the better. Normally a lot of images are used for training a face recognizer so that it can learn different looks of the same person, for example with glasses, without glasses, laughing, sad, happy, crying, with beard, without beard etc. To keep our tutorial simple we are going to use only 12 images for each person.
So our training data consists of a total of 2 persons with 12 images of each person. All training data is inside the training-data folder, which contains one folder per person, each named with the format sLabel (e.g. s1, s2), where Label is the integer label assigned to that person. For example, the folder named s1 contains the images for person 1. The directory structure tree for the training data is as follows:
training-data
|-------------- s1
| |-- 1.jpg
| |-- ...
| |-- 12.jpg
|-------------- s2
| |-- 1.jpg
| |-- ...
| |-- 12.jpg
The test-data folder contains images that we will use to test our face recognizer after it has been successfully trained.
Since the OpenCV face recognizer accepts labels as integers, we need to define a mapping between integer labels and persons' actual names, so below I am defining a mapping of persons' integer labels and their respective names.
Note: as we have not assigned label 0 to any person, the mapping for label 0 is empty.
#there is no label 0 in our training data so subject name for index/label 0 is empty
subjects = ["", "Tom Cruise", "Shahrukh Khan"]
You may be wondering why we need a data preparation step, right? Well, the OpenCV face recognizer accepts data in a specific format. It accepts two vectors: one vector of the faces of all the persons and a second vector of integer labels for each face, so that when processing a face the face recognizer knows which person that particular face belongs to.
For example, if we had 2 persons and 2 images for each person.
PERSON-1 PERSON-2
img1 img1
img2 img2
Then the prepare data step will produce the following face and label vectors.
FACES LABELS
person1_img1_face 1
person1_img2_face 1
person2_img1_face 2
person2_img2_face 2
The data preparation step can be further divided into the following sub-steps:
1. Read the directory names in the training data folder, one directory per subject (e.g. s1, s2).
2. For each subject, extract the label number from the directory name. Directory names follow the format sLabel, where Label is an integer representing the label we have assigned to that subject. So, for example, the folder name s1 means that the subject has label 1, s2 means the subject's label is 2, and so on. The label extracted in this step is assigned to each face detected in the next step.
3. Read all the images of the subject and detect the face in each image.
4. Add each detected face to the faces vector and its subject label to the labels vector.
Did you read my last article on face detection? No? Then you better do so right now because to detect faces, I am going to use the code from my previous article on face detection. So if you have not read it, I encourage you to do so to understand how face detection works and its coding. Below is the same code.
#function to detect face using OpenCV
def detect_face(img):
    #convert the test image to gray image as opencv face detector expects gray images
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    #load OpenCV face detector, I am using LBP which is fast
    #there is also a more accurate but slow Haar classifier
    face_cascade = cv2.CascadeClassifier('opencv-files/lbpcascade_frontalface.xml')

    #let's detect multiscale (some images may be closer to camera than others) images
    #result is a list of faces
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5)

    #if no faces are detected then return original img
    if (len(faces) == 0):
        return None, None

    #under the assumption that there will be only one face,
    #extract the face area
    (x, y, w, h) = faces[0]

    #return only the face part of the image
    return gray[y:y+h, x:x+w], faces[0]
I am using OpenCV's LBP face detector. On line 4, I convert the image to grayscale because most operations in OpenCV are performed in grayscale. Then on line 8 I load the LBP face detector using the cv2.CascadeClassifier class. After that, on line 12, I use the cv2.CascadeClassifier class's detectMultiScale method to detect all the faces in the image. On line 20, from the detected faces I only pick the first face, because in one image there will be only one face (under the assumption that there will be only one prominent face). As the faces returned by the detectMultiScale method are actually rectangles (x, y, width, height) and not actual face images, we have to extract the face image area from the main image. So on line 23 I extract the face area from the gray image and return both the face image area and the face rectangle.
Now you have got a face detector and you know the 4 steps to prepare the data, so are you ready to code the prepare data step? Yes? So let's do it.
#this function will read all persons' training images, detect face from each image
#and will return two lists of exactly same size, one list
#of faces and another list of labels for each face
def prepare_training_data(data_folder_path):

    #------STEP-1--------
    #get the directories (one directory for each subject) in data folder
    dirs = os.listdir(data_folder_path)

    #list to hold all subject faces
    faces = []
    #list to hold labels for all subjects
    labels = []

    #let's go through each directory and read images within it
    for dir_name in dirs:

        #our subject directories start with letter 's' so
        #ignore any non-relevant directories if any
        if not dir_name.startswith("s"):
            continue

        #------STEP-2--------
        #extract label number of subject from dir_name
        #format of dir name = sLabel
        #, so removing letter 's' from dir_name will give us label
        label = int(dir_name.replace("s", ""))

        #build path of directory containing images for current subject
        #sample subject_dir_path = "training-data/s1"
        subject_dir_path = data_folder_path + "/" + dir_name

        #get the images names that are inside the given subject directory
        subject_images_names = os.listdir(subject_dir_path)

        #------STEP-3--------
        #go through each image name, read image,
        #detect face and add face to list of faces
        for image_name in subject_images_names:

            #ignore system files like .DS_Store
            if image_name.startswith("."):
                continue

            #build image path
            #sample image path = training-data/s1/1.jpg
            image_path = subject_dir_path + "/" + image_name

            #read image
            image = cv2.imread(image_path)

            #display an image window to show the image
            cv2.imshow("Training on image...", image)
            cv2.waitKey(100)

            #detect face
            face, rect = detect_face(image)

            #------STEP-4--------
            #for the purpose of this tutorial
            #we will ignore faces that are not detected
            if face is not None:
                #add face to list of faces
                faces.append(face)
                #add label for this face
                labels.append(label)

    cv2.destroyAllWindows()
    cv2.waitKey(1)
    cv2.destroyAllWindows()

    return faces, labels
I have defined a function that takes the path, where training subjects' folders are stored, as parameter. This function follows the same 4 prepare data substeps mentioned above.
(step-1) On line 8 I am using the os.listdir method to read the names of all the folders stored on the path passed to the function as a parameter. On lines 10-13 I define the labels and faces vectors.
(step-2) After that I traverse through all the subjects' folder names, and from each subject's folder name, on line 27, I extract the label information. As the folder names follow the sLabel naming convention, removing the letter 's' from the folder name gives us the label assigned to that subject.
(step-3) On line 34, I read all the image names of the current subject being traversed, and on lines 39-66 I traverse those images one by one. On lines 53-54 I am using OpenCV's imshow(window_title, image) along with OpenCV's waitKey(interval) method to display the current image being traversed. The waitKey(interval) method pauses the code flow for the given interval (milliseconds); I am using it with a 100ms interval so that we can view the image window for 100ms. On line 57, I detect the face in the current image being traversed.
(step-4) On line 62-66, I add the detected face and label to their respective vectors.
But a function can't do anything unless we call it on some data that it has to prepare, right? Don't worry, I have got data of two beautiful and famous celebrities. I am sure you will recognize them!
Let's call this function on images of these beautiful celebrities to prepare data for training of our Face Recognizer. Below is a simple code to do that.
#let's first prepare our training data
#data will be in two lists of same size
#one list will contain all the faces
#and other list will contain respective labels for each face
print("Preparing data...")
faces, labels = prepare_training_data("training-data")
print("Data prepared")
#print total faces and labels
print("Total faces: ", len(faces))
print("Total labels: ", len(labels))
Preparing data...
Data prepared
Total faces: 23
Total labels: 23
This was probably the boring part, right? Don't worry, the fun stuff is coming up next. It's time to train our own face recognizer so that once trained it can recognize new faces of the persons it was trained on. Ready? OK, then let's train our face recognizer.
As we know, OpenCV comes equipped with three face recognizers.
cv2.face.createEigenFaceRecognizer()
cv2.face.createFisherFaceRecognizer()
cv2.face.createLBPHFaceRecognizer()
I am going to use LBPH face recognizer but you can use any face recognizer of your choice. No matter which of the OpenCV's face recognizer you use the code will remain the same. You just have to change one line, the face recognizer initialization line given below.
#create our LBPH face recognizer
face_recognizer = cv2.face.createLBPHFaceRecognizer()
#or use EigenFaceRecognizer by replacing above line with
#face_recognizer = cv2.face.createEigenFaceRecognizer()
#or use FisherFaceRecognizer by replacing above line with
#face_recognizer = cv2.face.createFisherFaceRecognizer()
Now that we have initialized our face recognizer and prepared our training data, it's time to train the face recognizer. We will do that by calling the train(faces-vector, labels-vector) method of the face recognizer.
#train our face recognizer of our training faces
face_recognizer.train(faces, np.array(labels))
Did you notice that instead of passing the labels vector directly to the face recognizer I first convert it to a numpy array? This is because OpenCV expects the labels vector to be a numpy array.
Still not satisfied? Want to see some action? Next step is the real action, I promise!
Now comes my favorite part, the prediction part. This is where we actually get to see if our algorithm is recognizing our trained subjects' faces or not. We will take two test images of our celebrities, detect the faces in each of them and then pass those faces to our trained face recognizer to see if it recognizes them.
Below are some utility functions that we will use for drawing a bounding box (rectangle) around a face and putting the celebrity's name near the face bounding box.
#function to draw rectangle on image
#according to given (x, y) coordinates and
#given width and height
def draw_rectangle(img, rect):
    (x, y, w, h) = rect
    cv2.rectangle(img, (x, y), (x+w, y+h), (0, 255, 0), 2)

#function to draw text on given image starting from
#passed (x, y) coordinates.
def draw_text(img, text, x, y):
    cv2.putText(img, text, (x, y), cv2.FONT_HERSHEY_PLAIN, 1.5, (0, 255, 0), 2)
The first function, draw_rectangle, draws a rectangle on an image based on the passed rectangle coordinates. It uses OpenCV's built-in function cv2.rectangle(img, topLeftPoint, bottomRightPoint, rgbColor, lineWidth) to draw the rectangle. We will use it to draw a rectangle around the face detected in a test image. The second function, draw_text, uses OpenCV's built-in function cv2.putText(img, text, startPoint, font, fontSize, rgbColor, lineWidth) to draw text on an image.
Now that we have the drawing functions, we just need to call the face recognizer's predict(face) method to test our face recognizer on test images. The following function does the prediction for us.
#this function recognizes the person in the image passed
#and draws a rectangle around the detected face with the name of the
#subject
def predict(test_img):
    #make a copy of the image as we don't want to change the original image
    img = test_img.copy()
    #detect face from the image
    face, rect = detect_face(img)

    #predict the image using our face recognizer;
    #predict returns the label and a confidence value
    label, confidence = face_recognizer.predict(face)
    #get name of respective label returned by face recognizer
    label_text = subjects[label]

    #draw a rectangle around face detected
    draw_rectangle(img, rect)
    #draw name of predicted person
    draw_text(img, label_text, rect[0], rect[1]-5)

    return img
The predict(face) method returns the label of the recognized person, which we then map to the subject's name and draw on the image. Now that we have the prediction function well defined, the next step is to actually call this function on our test images and display those test images to see if our face recognizer correctly recognized them. So let's do it. This is what we have been waiting for.
print("Predicting images...")
#load test images
test_img1 = cv2.imread("test-data/test1.jpg")
test_img2 = cv2.imread("test-data/test2.jpg")
#perform a prediction
predicted_img1 = predict(test_img1)
predicted_img2 = predict(test_img2)
print("Prediction complete")
#create a figure of 2 plots (one for each test image)
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 5))
#display test image1 result
ax1.imshow(cv2.cvtColor(predicted_img1, cv2.COLOR_BGR2RGB))
#display test image2 result
ax2.imshow(cv2.cvtColor(predicted_img2, cv2.COLOR_BGR2RGB))
#display both images
cv2.imshow("Tom cruise test", predicted_img1)
cv2.imshow("Shahrukh Khan test", predicted_img2)
cv2.waitKey(0)
cv2.destroyAllWindows()
cv2.waitKey(1)
cv2.destroyAllWindows()
Predicting images...
Prediction complete
Woohoo! Isn't it beautiful? Indeed, it is!
Face Recognition is a fascinating idea to work on and OpenCV has made it extremely simple and easy for us to code it. It just takes a few lines of code to have a fully working face recognition application and we can switch between all three face recognizers with a single line of code change. It's that simple.
Although the EigenFaces, FisherFaces and LBPH face recognizers are good, there are even better ways to perform face recognition, like using Histograms of Oriented Gradients (HOGs) and neural networks. Nowadays, the more advanced face recognition algorithms are implemented using a combination of OpenCV and machine learning. I have plans to write some articles on those more advanced methods as well, so stay tuned!
Download Details:
Author: informramiz
Source Code: https://github.com/informramiz/opencv-face-recognition-python
License: MIT License
Do you need to turn to a reputable company that offers outstanding office cleaning and commercial cleaning services in London? Maybe you need DBS-checked cleaners in London? In either case, ECS Commercial Cleaning is the right choice for you. We are one of the best cleaning companies, ready to meet all your cleaning needs, and we will do so in a timely and efficient manner.
We offer office cleaning, commercial cleaning, and sanitisation and specialist cleaning in London to our customers for either their home or business. We take pride in providing customized office cleaning and commercial cleaning services across London, regardless of the size of your facility. Our goal is to provide a 100% satisfactory experience and ensure your facility is sanitized, providing a productive and safe environment for employees and customers. We constantly stay on the cutting edge of technology to provide you with the best quality and most efficient cleaning services.
Skilled Cleaning Services
Our company has been providing skilled office and commercial cleaning services across London for over 10 years. Our experience, coupled with DBS-checked cleaners, has allowed us to make efficient use of the best processes, cleaning products and supplies to get the job done quickly and effectively without any disruption. Whether you require regular or one-time cleaning services, we will customize a cleaning program specifically for your needs. ECS Commercial Cleaning has the resources and expertise to get the job done right the first time.
Professional Experts and Advanced Technology
We have DBS-checked cleaners who are thoroughly trained and experienced in providing high-quality cleaning services that dramatically reduce dust and bacteria in your home and business. The ECS Commercial Cleaning team is committed to making your home and business a cleaner and healthier place. With more than 10 years’ experience in providing office cleaning and commercial cleaning services, you can be confident that our team has all the skills required to provide hassle-free services.
Pursue the Highest Standards in Cleaning
At ECS Commercial Cleaning, we persistently follow the highest standards in sanitisation and specialist cleaning with customized programs designed to meet your needs. We have the tools and cleaning supplies to handle your cleaning and disinfecting responsibilities. Our customized cleaning plans make sure that you are getting the best service for the best price.
Our cleaning team is equipped to handle any project, big or small, at any time of day. Call us today at 0161 5462235 to learn more about how, ECS Commercial Cleaning can handle all your office cleaning and commercial cleaning responsibilities in London.
#office cleaning #commercial cleaning #cleaning #cleaning team
In this Python tutorial we will learn how to convert a base-2 binary number string to an integer in Python. Ever faced the challenge of converting a binary number string into an integer for smooth processing in your Python code? Python offers seamless solutions for transforming data types without a hitch, making it an ideal choice for developers worldwide.
In this article, we’ll delve into two popular methods for tackling binary string conversion: Python’s built-in int() function and the bitstring library with its BitArray class. Join us as we break down each approach, guiding you through the process of efficient typecasting in Python.
A binary number is expressed in the base-2 number system using only “0”s and “1”s. These zeros and ones are the digits of a binary number; they are also called “bits”.
A computer system only uses the binary system for computations. It is the way in which machine code is written. Circuit diagrams can be used to represent the working of binary systems.
Boolean algebra, including “AND”, “OR” and “NOT” gates, can be used to represent addition, subtraction and multiplication in the binary system.
We can easily convert an integer from the decimal system, that is the base-10 system, to the binary system by repeatedly dividing it by 2 and arranging the remainders in bottom-to-top order.
Converting Decimal to Base-2 Numbers
A binary number can also be converted easily into a decimal number in mathematics.
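To make both directions concrete before we reach the built-in tools, here is a small sketch using two hypothetical helper functions written just for illustration (to_binary and to_decimal are not library functions). They follow the division-by-2 method and the positional-weight method described above.
def to_binary(n: int) -> str:
    #repeatedly divide by 2 and collect the remainders bottom-to-top
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  #the remainder is the next binary digit
        n //= 2
    return "".join(reversed(bits))

def to_decimal(bits: str) -> int:
    #each digit doubles the running value and adds the current bit
    value = 0
    for bit in bits:
        value = value * 2 + int(bit)
    return value

print(to_binary(19))        #prints '10011'
print(to_decimal("10011"))  #prints 19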
Converting a Base-2 Number String to an Integer in Python
A string in Python is something that is written between two quotation marks, for example, “this is a string”.
Hence, a base-2 number string is a binary number written between two quotation marks. For example, '1010001' is a base-2 number string.
In Python, programmers can explicitly define the conversion of one data type into another in their code. This is known as explicit typecasting.
Explicit typecasting is extremely useful. Unlike some other programming languages, Python has built-in functions that can perform explicit typecasting without requiring you to write huge blocks of code to transform one data type into another. With that in mind, let's look at two ways to convert a base-2 number string to an integer.
We can use the int() built-in function to convert a string literal into an integer in python. Let’s look at how we can implement this :
#converting a base-2 number string into an integer.
#taking an input for a number in the form of a string
inin=input("Enter a binary number= ")
#displaying the input
print("The given input is=",inin)
#converting using int()
outout=int(inin,2)
#displaying the output
print("the number in the decimal system or as an integer is=",outout)
The output would be:
Enter a binary number= 10011
The given input is= 10011
the number in the decimal system or as an integer is= 19
Example: Converting Binary String with int() Function
The bitstring module makes the creation, analysis and manipulation of binary data natural and easy, and its BitArray class comes in very handy for this conversion.
Before we can use this module we have to install it on our system. Run the following command in your command prompt:
pip install bitstring
Let’s see how we can implement this:
#converting a base-2 number string into an integer.
#importing required modules
from bitstring import BitArray
#taking an input for a number in the form of a string
inin=input("Enter a binary number= ")
#displaying the input
print("The given input is=",inin)
#converting using bitArray
outout=BitArray(bin=inin).int
#displaying the output
print("the number in the decimal system or as an integer is=",outout)
The output will be:
Enter a binary number= 0100111
The given input is= 0100111
the number in the decimal system or as an integer is= 39
Example: Converting Binary String with Bitstring Module
Throughout this tutorial, we’ve explored the process of converting binary numbers to decimal integers in Python. With Python’s extensive library of built-in functions and modules, there’s no need for manual calculations. Explicit typecasting is a powerful feature that simplifies this conversion process. We have demonstrated two methods for converting base-2 number strings to integers: using the int() function and the bitstring module.
Article source at: https://www.askpython.com