Zara Bryant

1548821280

What does /[^ -~]/ match?

I just started writing PHP again after a few years in other languages. While reading through some of my older scripts, I found this regular expression that I can't remember writing, and I can't find an answer for what it does.

The context is cleaning up some user input. Does it have anything to do with UTF8 or Latin character ranges?

$keyword = preg_replace('/[^ -~]/iu', '\\S{0,1}', $keyword);


#php #regex


Alfie Mellor

1548824444

Yes. The regular expression is replacing characters which are NOT between space and tilde: characters before space are control characters, and characters after tilde are outside 7-bit ASCII. (Space is character number 32 and tilde is 126.)
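
For illustration, the same character class behaves identically in Python's re module (the sample input below is made up):

```python
import re

# [^ -~] is a negated range: it matches any character NOT between
# space (0x20) and tilde (0x7E), i.e. anything outside printable ASCII.
non_printable_ascii = re.compile(r'[^ -~]')

# Stripping such characters from a string removes the accented 'é'
# and the tab, both outside the printable-ASCII range:
cleaned = non_printable_ascii.sub('', 'héllo\tworld')
```
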

vinay bajrangi

1626424144

Why We Need To Match Kundli Before Marriage?

When a date is set for marriage between two individuals, perhaps the first thing that the two families do is go for kundli matching by date of birth. Considered one of the most significant and foremost aspects of an arranged Indian marriage, matching Kundli or horoscopes is necessary before two people decide to get hitched, whether they are having a love or arranged marriage. Since matching kundlis can help you to know about any impending problems that await you on the other side of your marriage, experienced astrologers like Dr. Vinay Bajrangi always advise their clients to match horoscopes before marriage to avoid any major mishaps in the future.
Some of the reasons why astrologers recommend using a horoscope match calculator for marriage are as follows –
Compatibility - Kundli matching is important to ascertain compatibility between two individuals in terms of their mental and physical conditions, including mindset, temper, attitude and behaviour. It is considered the basic premise for a successful union. In order to assess that there is a sufficient amount of mutual desirability between the couple, the horoscopes are also analysed for physical attraction, because that is an important tool for a long-lasting relationship.
Finances/Career - Kundli matching is also a way to know about the future prospects of your spouse like career growth, business progress, promotion, financial stability, etc.
Dashas - Sometimes, the negative impact of Dashas (planetary combinations) can ruin your future as a married couple. There is a chance that a person’s horoscope could have Dashas like the ‘Mangal Dasha’ and ‘Shani Dasha’ right since the time they are born, depending upon the stars and the planets’ positions at that time. Such Dashas can create several problems at a later stage of marriage, about which none of the partners can do anything. The process of Kundli matching can help you overcome the negative impact before taking the step towards the marriage altar.
Health – Kundli matching helps you find out about the health prospects of your children, who would be born later in your life. The 8th guna of your Kundli, Nadi, represents childbirth and potential problems that might arise out of it. Therefore, Kundli matching is significant not just for yourself but also for the future generations to come.
Love Marriage – In case of love marriages, matching kundlis or using a horoscope match calculator for marriage is equally important as it helps the couple take care of any possible negative impacts awaiting them in the future. Matching of horoscopes is a sure-shot way to resolve any impending problems for a married couple.

Astrological Combinations for Healthy Marriage

The compatibility of partners as well as their health and finances are all necessary points that should be taken care of before finalizing a match. The birth charts of the two individuals provide all the answers, provided you consult a highly experienced astrologer. In case of malefic effects of the planets, your married life might face hindrances, which can be avoided with the help of some specific astrological combinations, as given below –

• Mercury and Venus are the two most significant planets in an individual’s horoscope.
• Manglik Yoga, one of the most malefic yogas, is formed in a person’s birth-chart when Mars is in the 1st, 4th, 7th, 8th or the 12th house. This yoga can be nullified only when two persons with similar yoga get married.
• Saturn and Rahu are two other planets that hold great significance in a person’s kundli with respect to his or her marriage prospects.
• When the Moon is in the 2nd house of Venus, it is considered to be highly inauspicious for the bride’s life span.
• The lifespan of both the groom and the bride is similar when the kundli shows a connection between planets Mercury and Venus.
• A groom’s life is at risk when Mercury aspects planets Sun and Rahu. Same is the case when Venus aspects planets Sun, Moon and Rahu.
• An individual whose kundli shows Jupiter aspected by Venus or Rahu aspected by Sun and Moon can expect more than one marriage in their life.

Marriage is a holy union guided by the planets and houses in your birth-chart, as well as by a multitude of astrological combinations. Talk to an ace astrologer like Dr. Vinay Bajrangi to know more about kundali matching and its various aspects. What you must understand is that skipping kundli matching before a love or arranged union could leave you unaware of the vagaries of married life, if any. An individual’s nature as well as their future is predicted through their kundli, and it helps to seek guidance from a knowledgeable astrologer before making a final decision about marriage. Meanwhile, if you want to have your doubts and queries addressed as per Vedic Astrology, you can connect with Dr. Bajrangi on vinaybajrangi.com or by phone: +91 9278665588 / 9278555588.

#kundli matching by date of birth #horoscope match calculator for marriage #free kundli matching #horoscope matching for marriage #free horoscope matching by name

What is MATCH (MATCH) | What is MATCH token | MATCH (MATCH) ICO

What is MATCH Token?

MATCH token is a utility token for the De-Fi and Decentralized Bet (De-Bet) platforms. It was developed by a group of people who want to share an equally great opportunity with holders through a decentralized network. MATCH token leverages a decentralized (blockchain) network because it offers advantages over a centralized one. The team believes the token can be nurtured into something valuable to holders while being used on an application that provides secure and transparent transactions on top of a smart contract, designed to give everyone involved the same opportunities to grow their accounts actively and passively within the ecosystem.

Where MATCH token is deployed and why?

MATCH token runs on top of the TRON network, one of the largest blockchain-based operating systems in the world. The token leverages the TRON network because it offers high throughput, high availability, and high scalability. Supported by 1,332 nodes across the globe, TRON can support MATCH tokens in rendering swift transaction times with lower fees than other smart contract networks, and speed is one of today's key application requirements. TRON's TPS has exceeded that of Bitcoin and Ethereum, which is one of our main reasons for selecting TRON as our main blockchain network.

Will the MATCH token’s value stay steady and tend to increase?

The MATCH token team’s vision is to grow the token’s value by using it actively and passively in an ecosystem, so that the value of the token can be maintained and tends to increase over time. This vision is pursued through the De-Fi and Decentralized Bet (De-Bet) platforms.

How does the MATCH token apply De-Fi in the ecosystem?

Another proven way to grow holders’ accounts passively is liquidity staking, a part of Decentralized Finance running on top of blockchain systems that applies the Automated Market Maker (AMM) principle. Token holders can become Liquidity Providers, earning swap fees and MATCH tokens through mining based on the current APY (Annual Percentage Yield). As a Liquidity Provider, you are eligible to earn a portion of the fees from the Liquidity Pool: 0.3% of each swap performed on the justswap.io platform, in proportion to your pool share. By staking your tokens in the Liquidity Pool, you can also be eligible to earn MATCH tokens based on the current APY. The calculation is performed transparently and systematically in the background, without any human interference, purely a fusion of mathematical logic and financial investment.
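
As a rough, hypothetical illustration (the swap volume and pool share below are made up, not official figures), the 0.3% swap-fee split described above works like this:

```python
def lp_fee_share(swap_volume_trx, fee_rate, pool_share):
    """Earnings of one liquidity provider from a single swap,
    proportional to their share of the pool."""
    return swap_volume_trx * fee_rate * pool_share

# A hypothetical LP holding 2% of the pool, on a 10,000 TRX swap
# at the 0.3% fee rate, earns 0.6 TRX from that swap:
earned = lp_fee_share(10_000, 0.003, 0.02)
```
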

What is Decentralized Bet (De-Bet)?

MATCH token can only be used actively on a decentralized application (DApp) named De-Bet, a transparent and trusted sports-betting platform. Moreover, De-Bet is the first decentralized sports betting application on the TRON network. De-Bet focuses on user experience and smart-contract technology that helps Makers and Takers play their own roles without any human interference, bringing an uplifted level of amusement to the betting arena with the benefits of blockchain technology. De-Bet will be delivered in two phases: (1) Decentralized Betting Smart Contract and (2) Decentralized Betting for other Providers in Smart Contract. Detailed information on improvements and process flow can be found in our whitepaper (link).

How can I participate in MATCH token development?

The MATCH token team will initiate an ICO (Initial Coin Offering) through Private Sale and Pre-Sale events.

When does the MATCH Token Private Sale start?

The Private Sale will be open on 6-13 Dec 2020 for limited investors. Through this Private Sale, we expect to sell 1,250,000 MATCH tokens at 5 TRX each. Be the first holder of MATCH Token! By joining this private sale you will have the opportunity to get MATCH tokens at 50% of the market price.

  • Private Sale price: 5 TRX
  • Minimum contribution: 500 TRX
  • Normal price: 10 TRX
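
A quick sanity check of the figures above (assuming the Private Sale sells out completely):

```python
tokens = 1_250_000      # MATCH tokens offered in the Private Sale
private_price = 5       # TRX per MATCH
normal_price = 10       # TRX per MATCH

raised = tokens * private_price              # 6,250,000 TRX if fully sold
discount = 1 - private_price / normal_price  # 0.5, i.e. the quoted 50% off
```
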

Roadmap


When does the MATCH Token Pre-Sale start?

There will be a Pre-Sale on 20-27 December 2020, open to the public for enthusiastic investors. From this Pre-Sale, we expect to sell 5,000,000 MATCH tokens at 6 TRX each. By joining this pre-sale you will have the opportunity to get MATCH tokens at 60% of the market price.

  • Pre-Sale price: 6 TRX
  • Minimum contribution: 500 TRX
  • Normal price: 10 TRX

How do I get MATCH tokens during the Sale event?

It is very simple. All you need to do is have TRX in your wallet, then click the “Buy MATCH token” link on our site.

Can you recommend a digital wallet?

Most people are using the TRON Link wallet. Disclaimer: The MATCH team does not endorse, recommend, or make any representations with respect to digital wallets. It’s advisable to always conduct your own due diligence before trusting any third party or third-party technology.

How can I get a MATCH token after the Sale event?

You will be able to trade MATCH once it is listed on Justswap.io; we aim to be listed in January 2021. Disclaimer: Please note that the MATCH team does not endorse, recommend, or make any representations with respect to Internet lists or exchanges more generally. Every exchange has a different process for trading MATCH tokens, and their customer support, policies, and practices may vary widely. It’s advisable to conduct your own due diligence before trusting any third party or third-party technology.

Can I get an airdrop from MATCH Token?

MATCH will announce the airdrop when we first launch our product, and the airdrop itself will be distributed in Q1 2021. Make sure you do not miss this opportunity: follow our community channel to keep yourself updated.

Instagram : https://www.instagram.com/matchtoken/

ICO DATE: Dec 6, 2020 - Dec 27, 2020

Private Sale: Dec 6 — Dec 13, 2020

Private Sale price: 5 TRX (50% Discount from the public sale price)

Pre-Sale: Dec 20 — Dec 27, 2020

Pre-Sale price: 6 TRX (40% Discount from the public sale price)


Looking for more information…

☞ Website
☞ Explorer
☞ Whitepaper


Thanks for visiting and reading this article! I highly appreciate your support! Please share if you liked it!

#blockchain #crypto #match #match token

Royce Reinger

1657445580

Fuzzy-string-match: Fuzzy String Matching Library for Ruby

What is fuzzy-string-match

  • fuzzy-string-match is a fuzzy string matching library for Ruby.
  • It is fast (written in C with RubyInline).
  • It supports only the Jaro-Winkler distance algorithm.
  • This program was ported by hand from lucene-3.0.2. (Lucene is a Java product.)
  • If you want to add another string distance algorithm, please fork it on GitHub and port it yourself.

Why I developed fuzzy-string-match

  • I tried amatch-0.2.5, but it contains some issues:
    1. memory leaks.
    2. I found it difficult to maintain.
  • So, I decided to create another gem by porting lucene-3.0.x.

Installing

gem install fuzzy-string-match

Features

  • Calculate Jaro-Winkler distance of two strings.
    • The pure Ruby version can handle both ASCII and UTF-8 strings (but it is slow).
    • The native version can handle only ASCII strings (but it is fast).
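
As a rough illustration of the metric itself, here is a Python sketch of the Jaro-Winkler calculation (illustrative only; the gem's C and Ruby implementations are ported from Lucene, not from this code):

```python
def jaro_winkler(s1, s2, prefix_scale=0.1):
    """Sketch of the Jaro-Winkler similarity (1.0 = identical)."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if not len1 or not len2:
        return 0.0
    # characters count as matching within this sliding window
    window = max(len1, len2) // 2 - 1
    matched1 = [False] * len1
    matched2 = [False] * len2
    m = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not matched2[j] and s2[j] == c:
                matched1[i] = matched2[j] = True
                m += 1
                break
    if m == 0:
        return 0.0
    # count transpositions: matched characters that appear out of order
    t, k = 0, 0
    for i in range(len1):
        if matched1[i]:
            while not matched2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    jaro = (m / len1 + m / len2 + (m - t) / m) / 3
    # Winkler boost: reward a common prefix of up to 4 characters
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == 4:
            break
        prefix += 1
    return jaro + prefix * prefix_scale * (1 - jaro)
```

For example, jaro_winkler("dixon", "dicksonx") is about 0.8133, matching the irb sample in this README.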

Sample code

Native version

require 'fuzzystringmatch'
jarow = FuzzyStringMatch::JaroWinkler.create( :native )
p jarow.getDistance(  "jones",      "johnson" )

Pure ruby version

require 'fuzzystringmatch'
jarow = FuzzyStringMatch::JaroWinkler.create( :pure )
p jarow.getDistance(  "jones",      "johnson" )
p jarow.getDistance(  "ああ",        "あい"        )

Sample on irb

irb(main):001:0> require 'fuzzystringmatch'
=> true

irb(main):002:0> jarow = FuzzyStringMatch::JaroWinkler.create( :native )
=> #<FuzzyStringMatch::JaroWinklerNative:0x000001011b0010>

irb(main):003:0> jarow.getDistance( "al", "al" )
=> 1.0

irb(main):004:0> jarow.getDistance( "dixon", "dicksonx" )
=> 0.8133333333333332

Benchmarks

$ rake bench
ruby ./benchmark/vs_amatch.rb
 --- 
 --- Each match functions will be called 1Mega times. --- 
 --- 
[Amatch]
      user     system      total        real
  1.160000   0.050000   1.210000 (  1.218259)
[this Module (pure)]
      user     system      total        real
 39.940000   0.160000  40.100000 ( 40.542448)
[this Module (native)]
      user     system      total        real
  0.480000   0.000000   0.480000 (  0.484187)

Requires

for CRuby

  • RubyInline
  • Ruby 2.0.0 or higher ( includes RubyInstaller.org's CRuby on Windows )

for JRuby

  • JRuby 1.6.6 or higher

Author

  • Copyright (C) Kiyoka Nishiyama kiyoka@sumibi.org
  • I ported from java source code of lucene-3.0.2.

See also

ChangeLog

1.0.1 / Jun 25, 2017

  • support JRuby 1.7.26(CRuby 1.9 compatible)

1.0.0 / Mar 10, 2017

  • First stable release

0.9.9 / Mar 9, 2017

  • Supported ruby version is 2.0.0 or higher(for RHEL 7.x)

0.9.8 / Mar 9, 2017

  • Supported ruby version is 2.1.0 or higher
  • Merge pull request #16 from ferdinandrosario/ferdinandrosario-patch-1 (Travis rubies updated)
  • Merge pull request #14 from timsatterfield/master (Reduce calls to strlen() in native jaro winkler)

0.9.7 / Oct 15, 2014

  • Use rspec 3.1 syntax.
  • Fixed: issue #12 Using stack allocated memory.
  • Fixed: remove duplicated dependency of gem package.

0.9.6 / Dec 21, 2013

  • New feature: fuzzy-string-match falls back to pure Ruby mode when C compilation fails.
  • fuzzy-string-match_pure is an obsolete gem.

0.9.5 / Mar 26, 2013

  • Fixed: 'jarowinkler.rb:42: warning: implicit conversion shortens 64-bit value into a 32-bit value' on MacOS X 64bit env.

0.9.4 / July 10, 2012

  • Fixed: undefined method `getDistance' error.

0.9.3 / Feb 27, 2012

  • Changed gem dependency of `rspec'. gemspec.dependency( "rspec" ) to gemspec.development_dependency( "rspec" )

0.9.2 / Feb 17, 2012

Supported the JRuby platform.

Divided into two gems:

  1. fuzzy-string-match ... native (RubyInline) version.
  2. fuzzy-string-match_pure ... pure Ruby version.

Divided the rspec files into several files.

Made the gem testable: please install rubygems-test and run `gem test`.

0.9.1 / Jul 30, 2011

Changed the gcc compiler options for RubyInline.

Stopped using obsolete RSpec methods.

0.9.0 / Oct 12, 2010

  • First release.

Author: Kiyoka
Source Code: https://github.com/kiyoka/fuzzy-string-match 
License: Apache-2.0 license

#ruby #string #match 

Anissa Barrows

1669099573

What Is Face Recognition? Facial Recognition with Python and OpenCV

In this article, we will learn what face recognition is and how it differs from face detection. We will go briefly over the theory of face recognition and then jump into the coding section. By the end of this article, you will be able to make a face recognition program for recognizing faces in images as well as in a live webcam feed.

What is Face Detection?

In computer vision, one essential problem we are trying to solve is automatically detecting objects in an image without human intervention. Face detection can be thought of as such a problem, where we detect human faces in an image. There may be slight differences between human faces, but overall it is safe to say that certain features are associated with all human faces. There are various face detection algorithms, but the Viola-Jones algorithm is one of the oldest methods still used today, and we will use it later in the article. You can go through the Viola-Jones algorithm after completing this article, as I'll link it at the end.

Face detection is usually the first step towards many face-related technologies, such as face recognition or verification. However, face detection can have very useful applications. The most successful application of face detection would probably be photo taking. When you take a photo of your friends, the face detection algorithm built into your digital camera detects where the faces are and adjusts the focus accordingly.


What is Face Recognition?


Now that we are successful in making algorithms that can detect faces, can we also recognise whose faces they are?

Face recognition is a method of identifying or verifying the identity of an individual using their face. There are various algorithms that can do face recognition but their accuracy might vary. Here I am going to describe how we do face recognition using deep learning.

So now let us understand how we recognise faces using deep learning. We make use of face embedding in which each face is converted into a vector and this technique is called deep metric learning. Let me further divide this process into three simple steps for easy understanding:

Face Detection: The very first task we perform is detecting faces in the image or video stream. Now that we know the exact location/coordinates of face, we extract this face for further processing ahead.
 

Feature Extraction: Now that we have cropped the face out of the image, we extract features from it. Here we are going to use face embeddings to extract the features from the face. A neural network takes an image of the person’s face as input and outputs a vector which represents the most important features of the face. In machine learning, this vector is called an embedding, and thus we call this vector a face embedding. Now how does this help in recognizing the faces of different people?
 

While training the neural network, the network learns to output similar vectors for faces that look similar. For example, if I have multiple images of my face taken at different times, some of my facial features might change, but not to a great extent. So in this case the vectors associated with the faces are similar, or in short, they are very close in the vector space.

After training, the network learns to output vectors that are closer to each other (similar) for faces of the same person.

We are not going to train such a network here as it takes a significant amount of data and computation power to train such networks. We will use a pre-trained network trained by Davis King on a dataset of ~3 million images. The network outputs a vector of 128 numbers which represent the most important features of a face.

Now that we know how this network works, let us see how we use this network on our own data. We pass all the images in our data to this pre-trained network to get the respective embeddings and save these embeddings in a file for the next step.

Comparing faces: Now that we have face embeddings for every face in our data saved in a file, the next step is to recognise a new image that is not in our data. The first step is to compute the face embedding for the image using the same network we used above, and then compare this embedding with the rest of the embeddings we have. We recognise the face if the generated embedding is close or similar to any stored embedding.

So we passed two images: one of Vladimir Putin and the other of George W. Bush. In our example, we did not save the embeddings for Putin, but we saved the embeddings of Bush. Thus, when we compared the two new embeddings with the existing ones, the vector for Bush is close to the other face embeddings of Bush, whereas the face embeddings of Putin are not close to any other embedding, and thus the program cannot recognise him.
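
The comparison step can be sketched in a few lines of plain Python. The vectors and names below are made up (3-dimensional for readability; real embeddings have 128 dimensions), and 0.6 is the default tolerance used by the face_recognition library:

```python
import math

# stored embeddings from the "known" dataset (hypothetical values)
known = {
    "bush_1": [0.10, 0.80, 0.30],
    "bush_2": [0.12, 0.78, 0.31],
}
# embedding computed for a new, unlabelled face (hypothetical)
new_face = [0.11, 0.79, 0.30]

threshold = 0.6  # maximum distance still counted as "the same person"
best = min(known, key=lambda name: math.dist(known[name], new_face))
match = best if math.dist(known[best], new_face) <= threshold else "Unknown"
# the new embedding is closest to a stored Bush embedding, so it is recognised
```
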

What is OpenCV

In the field of Artificial Intelligence, computer vision is one of the most interesting and challenging tasks. Computer vision acts like a bridge between computer software and the visual world around us, allowing software to understand and learn about its surroundings. For example: determining a fruit based on its color, shape and size. This task can be very easy for the human brain; in a computer vision pipeline, however, we first gather the data, then perform data-processing activities, and then train the model to distinguish between fruits based on their size, shape and color.

Currently, various packages exist to perform machine learning, deep learning and computer vision tasks, and OpenCV is one of the most popular libraries for such complex activities. OpenCV is an open-source library. It is supported by various programming languages such as R and Python, and it runs on most platforms, including Windows, Linux and macOS.

To know more about how face recognition works on opencv, check out the free course on face recognition in opencv.

Advantages of OpenCV:

  • OpenCV is an open-source library and is free of cost.
  • Compared to other libraries, it is fast, since it is written in C/C++.
  • It works well on systems with less RAM.
  • It supports most operating systems, such as Windows, Linux and macOS.

Installation: 

Here we will focus on installing OpenCV for Python only. We can install OpenCV using pip or conda (for an Anaconda environment).

  1. Using pip: 

Using pip, the installation process of openCV can be done by using the following command in the command prompt.

pip install opencv-python

  2. Using conda:

If you are using anaconda environment, either you can execute the above code in anaconda prompt or you can execute the following code in anaconda prompt.

conda install -c conda-forge opencv

Face Recognition using Python

In this section, we shall implement face recognition using OpenCV and Python. First, let us see the libraries we will need and how to install them:

  • OpenCV
  • dlib
  • Face_recognition

OpenCV is an image and video processing library and is used for image and video analysis, like facial detection, license plate reading, photo editing, advanced robotic vision, optical character recognition, and a whole lot more.
 

The dlib library, maintained by Davis King, contains an implementation of “deep metric learning” which is used to construct the face embeddings used for the actual recognition process.
 

The face_recognition library, created by Adam Geitgey, wraps around dlib’s facial recognition functionality and is super easy to work with; we will be using it in our code. Remember to install the dlib library before you install face_recognition.
 

To install OpenCV, type in the command prompt:
 

pip install opencv-python

I have tried various ways to install dlib on Windows but the easiest of all of them is via Anaconda. First, install Anaconda (here is a guide to install it) and then use this command in your command prompt:
 

conda install -c conda-forge dlib

Next, to install face_recognition, type in the command prompt:

pip install face_recognition

Now that we have all the dependencies installed, let us start coding. We will create three files. The first will take our dataset and extract the face embedding for each face using dlib, then save these embeddings in a file.
 

In the next file we will compare new faces against the saved embeddings to recognise faces in images, and then we will do the same for a live webcam feed.
 

Extracting features from Face

First, you need to get a dataset or even create one of your own. Just make sure to arrange all the images in folders, with each folder containing images of just one person.

Next, save the dataset in a folder the same as you are going to make the file. Now here is the code:


from imutils import paths
import face_recognition
import pickle
import cv2
import os

# get paths of each file in the folder named Images
# Images here contains my data (folders of various persons)
imagePaths = list(paths.list_images('Images'))
knownEncodings = []
knownNames = []
# loop over the image paths
for (i, imagePath) in enumerate(imagePaths):
    # extract the person name from the image path
    name = imagePath.split(os.path.sep)[-2]
    # load the input image and convert it from BGR (OpenCV ordering)
    # to dlib ordering (RGB)
    image = cv2.imread(imagePath)
    rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
    # use face_recognition to locate faces
    boxes = face_recognition.face_locations(rgb, model='hog')
    # compute the facial embedding for each face
    encodings = face_recognition.face_encodings(rgb, boxes)
    # loop over the encodings
    for encoding in encodings:
        knownEncodings.append(encoding)
        knownNames.append(name)
# save encodings along with their names in the dictionary data
data = {"encodings": knownEncodings, "names": knownNames}
# use pickle to save data into a file for later use
with open("face_enc", "wb") as f:
    f.write(pickle.dumps(data))

Now that we have stored the embedding in a file named “face_enc”, we can use them to recognise faces in images or live video stream.

Face Recognition in Live webcam Feed

Here is the script to recognise faces on a live webcam feed:


import face_recognition
import pickle
import cv2
import os

# find the path of the xml file containing the haarcascade
cascPathface = os.path.dirname(
    cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
# load the haarcascade in the cascade classifier
faceCascade = cv2.CascadeClassifier(cascPathface)
# load the known faces and embeddings saved by the previous script
data = pickle.loads(open('face_enc', "rb").read())
print("Streaming started")
video_capture = cv2.VideoCapture(0)
# loop over frames from the video stream
while True:
    # grab the frame from the video stream
    ret, frame = video_capture.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(gray,
                                         scaleFactor=1.1,
                                         minNeighbors=5,
                                         minSize=(60, 60),
                                         flags=cv2.CASCADE_SCALE_IMAGE)
    # convert the input frame from BGR to RGB
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    # compute the facial embeddings for each face in the input
    encodings = face_recognition.face_encodings(rgb)
    names = []
    # loop over the facial embeddings in case
    # we have multiple embeddings for multiple faces
    for encoding in encodings:
        # compare this encoding with the encodings in data["encodings"];
        # matches is a list of booleans: True where it matches closely,
        # False for the rest
        matches = face_recognition.compare_faces(data["encodings"],
                                                 encoding)
        # set name to "Unknown" if no encoding matches
        name = "Unknown"
        # check to see if we have found a match
        if True in matches:
            # find the positions at which we get True and store them
            matchedIdxs = [i for (i, b) in enumerate(matches) if b]
            counts = {}
            # loop over the matched indexes and maintain a count for
            # each recognized face
            for i in matchedIdxs:
                # check the names at the indexes stored in matchedIdxs
                name = data["names"][i]
                # increase the count for the name we got
                counts[name] = counts.get(name, 0) + 1
            # set the name which has the highest count
            name = max(counts, key=counts.get)
        # update the list of names
        names.append(name)
    # loop over the recognized faces
    for ((x, y, w, h), name) in zip(faces, names):
        # draw the predicted face name on the image
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, name, (x, y), cv2.FONT_HERSHEY_SIMPLEX,
                    0.75, (0, 255, 0), 2)
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
video_capture.release()
cv2.destroyAllWindows()

https://www.youtube.com/watch?v=fLnGdkZxRkg

Although in the example above we used a haar cascade to detect faces, you can also use face_recognition.face_locations to detect faces, as we did in the previous script.

Face Recognition in Images

The script for detecting and recognising faces in images is very similar to the one you saw above. Try it yourself, and if you can't, take a look at the code below:


import face_recognition
import pickle
import cv2
import os

# find the path of the xml file containing the haarcascade
cascPathface = os.path.dirname(
    cv2.__file__) + "/data/haarcascade_frontalface_alt2.xml"
# load the haarcascade in the cascade classifier
faceCascade = cv2.CascadeClassifier(cascPathface)
# load the known faces and embeddings saved by the previous script
data = pickle.loads(open('face_enc', "rb").read())
# find the path of the image you want to recognise faces in and pass it here
image = cv2.imread("Path-to-img")  # replace "Path-to-img" with your image path
rgb = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
# convert the image to greyscale for the haarcascade
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = faceCascade.detectMultiScale(gray,
                                     scaleFactor=1.1,
                                     minNeighbors=5,
                                     minSize=(60, 60),
                                     flags=cv2.CASCADE_SCALE_IMAGE)
# compute the facial embeddings for each face in the input
encodings = face_recognition.face_encodings(rgb)
names = []
# loop over the facial embeddings in case
# we have multiple embeddings for multiple faces
for encoding in encodings:
    # compare this encoding with the encodings in data["encodings"];
    # matches is a list of booleans: True where it matches closely,
    # False for the rest
    matches = face_recognition.compare_faces(data["encodings"],
                                             encoding)
    # set name to "Unknown" if no encoding matches
    name = "Unknown"
    # check to see if we have found a match
    if True in matches:
        # find the positions at which we get True and store them
        matchedIdxs = [i for (i, b) in enumerate(matches) if b]
        counts = {}
        # loop over the matched indexes and maintain a count for
        # each recognized face
        for i in matchedIdxs:
            # check the names at the indexes stored in matchedIdxs
            name = data["names"][i]
            # increase the count for the name we got
            counts[name] = counts.get(name, 0) + 1
        # set the name which has the highest count
        name = max(counts, key=counts.get)
    # update the list of names
    names.append(name)
# loop over the recognized faces
for ((x, y, w, h), name) in zip(faces, names):
    # draw the predicted face name on the image
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(image, name, (x, y), cv2.FONT_HERSHEY_SIMPLEX,
                0.75, (0, 255, 0), 2)
cv2.imshow("Frame", image)
cv2.waitKey(0)


This brings us to the end of this article where we learned about face recognition.

You can also upskill with Great Learning’s PGP Artificial Intelligence and Machine Learning Course. The course offers mentorship from industry leaders, and you will also have the opportunity to work on real-time industry-relevant projects.


Original article source at: https://www.mygreatlearning.com

#python #opencv 

HI Python

1623854880

Pattern Matching in Python 3.10

The Switch statement on steroids

Python 3.10 has implemented the switch statement — sort of. The switch statement in other languages such as C or Java does a simple value match on a variable and executes code depending on that value.

That might be good enough for C but this is Python, and Python 3.10 implements a much more powerful and flexible construct called Structural Pattern Matching. It can be used as a simple switch statement but is capable of much more.

#switch-statement #python #hands-on-tutorials #pattern-matching #pattern matching in python 3.10 #python 3.10