Garry Taylor

Fake Face Generator Using DCGAN Model

Overview

In the following article, we will define and train a Deep Convolutional Generative Adversarial Network (DCGAN) on a dataset of faces. The main objective is to get a Generator Network to produce new images of fake human faces that look as realistic as possible.

To do so, we will first build an intuition for how **GAN**s and **DCGAN**s work and then combine this knowledge to build a Fake Face Generator model. By the end of this post, you will be able to generate your own fake samples for any given dataset, using the concepts from this article.

Introduction

The following article is divided into two sections:

  • Theory — Understanding the intuition behind how GANs and DCGANs work.
  • Practical — Implementing a Fake Face Generator in PyTorch.

This article covers both sections, so let's begin the journey.

Theory

Intuition Behind Generative Adversarial Networks (GANs)


  • Definition

GANs, in general, can be defined as generative models that let us generate a whole image in parallel. Like several other kinds of generative models, a GAN uses a differentiable function, represented by a neural network, as its Generator Network.

  • Generator Network

The Generator Network takes random noise as input, runs the noise through the differentiable function (the neural network) to transform it, and reshapes it to have a recognizable structure similar to the images in the training dataset. The output of the Generator is determined by the choice of input random noise. Running the Generator Network over several different random input noises produces different realistic output images.

The end goal of the Generator is to learn a distribution similar to the distribution of the training dataset, so that it can sample realistic images. To do so, the Generator Network needs to be trained. The training process of a GAN is very different from that of other generative models (most generative models, e.g. Variational Auto-Encoders (VAEs), are trained by adjusting the parameters to maximize the probability that the Generator produces realistic samples). GANs, on the other hand, use a second network to train the Generator, called the Discriminator Network.

  • Discriminator Network

The Discriminator Network is a basic classifier network that outputs the probability that an image is real. During the training process, the Discriminator Network is shown real images from the training set half the time and fake images from the Generator the other half of the time. The Discriminator's target is to assign a probability near 1 to real images and a probability near 0 to fake images.

The Generator, on the other hand, tries the opposite: its target is to generate fake images for which the Discriminator outputs a probability close to 1 (i.e., it considers them to be real images from the training set). As training goes on, the Discriminator becomes better at classifying real and fake images, so to fool it the Generator is forced to improve and produce more realistic samples. So we can say that:

GANs can be considered a two-player (Generator vs. Discriminator) non-cooperative game, where each player wishes to minimize its own cost function.
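For reference, this game is usually written as the following minimax objective (from the original GAN paper by Goodfellow et al.), which the Discriminator D tries to maximize and the Generator G tries to minimize:

min_G max_D V(D, G) = E_{x ~ p_data}[ log D(x) ] + E_{z ~ p_z}[ log(1 - D(G(z))) ]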

Difference Between GANs and DCGANs

DCGANs are very similar to GANs but specifically focus on using deep convolutional networks in place of the fully-connected networks used in Vanilla GANs.

Convolutional networks help in finding deep correlations within an image, that is, they look for spatial correlation. This means a DCGAN is a better option for image/video data, whereas GANs can be considered a general idea on which DCGANs and many other architectures (CGAN, CycleGAN, StarGAN, and many others) have been developed.

In this article, we are mainly working with image data, which means a DCGAN is a better option than a Vanilla GAN. So from now on, we will be focusing on DCGANs.

Some Tips For Training DCGANs

All of these training tips apply to Vanilla GANs as well.

  • Make sure that both the Discriminator and Generator have at least one hidden layer. This ensures that both models have the Universal Approximation Property.

The Universal Approximation Property states that a feed-forward network with a single hidden layer containing a finite number of hidden units can approximate any probability distribution, given enough hidden units.

  • For the hidden units, many activation functions could work, but Leaky ReLUs are the most popular. Leaky ReLUs make sure that the gradient flows through the entire architecture. This is very important for DCGANs because the only way the Generator can learn is to receive a gradient from the Discriminator.
  • One of the most popular activation functions for the output of the Generator Network is the hyperbolic tangent (tanh) activation function (based on the Improved Techniques for Training GANs paper).
  • As the Discriminator is a binary classifier, we will use the Sigmoid activation function to get the final probability. (A toy illustration of these three activations follows this list.)
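As a quick illustration of these three activation choices (a toy tensor, not data from the model):

import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, 0.5])

print(F.leaky_relu(x, negative_slope=0.2))  # negatives are scaled down, not zeroed, so gradients keep flowing
print(torch.tanh(x))                        # squashes values into (-1, 1), matching the generator's output range
print(torch.sigmoid(x))                     # squashes values into (0, 1), interpreted as the discriminator's probability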

So far, we have talked about the working intuition and some tips and tricks for training GANs/DCGANs. But many questions are still left unanswered. Some of them are:

Which optimizer should we choose? How is the cost function defined? How long does the network need to be trained? These, and many others, are covered in the Practical section.

Practical

The implementation part is broken down into a series of tasks, from loading data to defining and training adversarial networks. At the end of this section, you'll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise.

(1) Get The Data

You’ll be using the CelebFaces Attributes Dataset (CelebA) to train your adversarial networks. This is a more complex dataset than MNIST, so we need to define a deeper network (a DCGAN) to generate good results. I would suggest using a GPU for training.
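The training code later in this article moves models and tensors to the GPU whenever a boolean flag called train_on_gpu is set. The original post never shows where that flag comes from; a minimal sketch would be:

import torch

# True if a CUDA-capable GPU is available; used by the training code below
train_on_gpu = torch.cuda.is_available()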

(2) Preparing Data

As the main objective of this article is building a DCGAN model, instead of doing the preprocessing ourselves we will use a pre-processed dataset. You can download the smaller subset of the CelebA dataset from here. And if you are interested in doing the pre-processing yourself, do the following:

  • Crop images to remove the part that doesn’t include the face.
  • Resize them into 64x64x3 NumPy images (a minimal sketch of both steps follows this list).
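If you do want to perform this preprocessing yourself, here is a minimal sketch using Pillow. The 108-pixel center crop and the raw_faces/ and processed_faces/ folder names are assumptions for illustration, not part of the original pipeline:

import os
from PIL import Image

RAW_DIR, OUT_DIR = 'raw_faces/', 'processed_faces/'   # assumed folder names
CROP, OUT_SIZE = 108, 64                               # center-crop box and final image size

for name in os.listdir(RAW_DIR):
    img = Image.open(os.path.join(RAW_DIR, name)).convert('RGB')
    w, h = img.size
    left, top = (w - CROP) // 2, (h - CROP) // 2
    img = img.crop((left, top, left + CROP, top + CROP))   # keep the central face region
    img = img.resize((OUT_SIZE, OUT_SIZE))                 # 64x64; becomes 64x64x3 as a NumPy array
    img.save(os.path.join(OUT_DIR, name))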

Now, we will create a DataLoader to access the images in batches.

import torch
from torchvision import datasets, transforms

def get_dataloader(batch_size, image_size, data_dir='train/'):
    """
    Batch the neural network data using DataLoader
    :param batch_size: The size of each batch; the number of images in a batch
    :param image_size: The square size of the image data (x, y)
    :param data_dir: Directory where image data is located
    :return: DataLoader with batched data
    """
    transform = transforms.Compose([transforms.Resize(image_size),
                                    transforms.CenterCrop(image_size),
                                    transforms.ToTensor()])

    dataset = datasets.ImageFolder(data_dir, transform=transform)

    dataloader = torch.utils.data.DataLoader(dataset=dataset, batch_size=batch_size, shuffle=True)
    return dataloader

# Define function hyperparameters
batch_size = 256
img_size = 32

# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)

DataLoader hyperparameters:

  • You can decide on any reasonable batch_size parameter.
  • However, your image_size must be 32. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces.

Next, we will write some code to get a visual representation of the dataset.

import numpy as np
import matplotlib.pyplot as plt

# helper function: convert a Tensor image to a NumPy array and display it
def imshow(img):
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = next(dataiter)  # _ for no labels

# plot the images in the batch
fig = plt.figure(figsize=(20, 4))
plot_size = 20
for idx in np.arange(plot_size):
    ax = fig.add_subplot(2, plot_size // 2, idx + 1, xticks=[], yticks=[])
    imshow(images[idx])

Keep in mind that you have to convert the Tensor images into NumPy arrays and transpose the dimensions to correctly display an image, as done in imshow above (in the DataLoader we transformed the images to Tensors). Run this piece of code to get a visualization of the dataset.

[Figure: a sample batch of pre-processed CelebA training images]

Now, before beginning with the next section (Defining Model), we will write a function to scale the image data to a pixel range of -1 to 1, which we will use during training. The reason for doing so is that the output of a tanh-activated generator contains pixel values in the range -1 to 1, so we need to rescale our training images to the same -1 to 1 range (right now, they are in the range 0–1).

def scale(x, feature_range=(-1, 1)):
    '''Scale takes in an image x and returns that image, scaled
       with a feature_range of pixel values from -1 to 1.
       This function assumes that the input x is already scaled from 0-1.'''
    # scale from (0, 1) to feature_range and return the scaled x
    min_val, max_val = feature_range
    x = x * (max_val - min_val) + min_val
    return x
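A quick sanity check on a random tensor (standing in for a batch of images in the 0–1 range produced by ToTensor()) shows the expected output range:

import torch

img = torch.rand(16, 3, 32, 32)   # fake "batch" with values in [0, 1]
scaled_img = scale(img)

print(scaled_img.min().item())    # close to -1
print(scaled_img.max().item())    # close to  1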

(3) Defining Model

A GAN is comprised of two adversarial networks, a discriminator and a generator. So in this section, we will define architectures for both of them.

Discriminator

The discriminator is a convolutional classifier, only without any max-pooling layers. Here is the code for the Discriminator Network.

import torch.nn as nn
import torch.nn.functional as F

# helper function: a convolutional layer, optionally followed by batch normalization
def conv(input_c, output, kernel_size, stride=2, padding=1, batch_norm=True):
    layers = []
    con = nn.Conv2d(input_c, output, kernel_size, stride, padding, bias=False)
    layers.append(con)

    if batch_norm:
        layers.append(nn.BatchNorm2d(output))

    return nn.Sequential(*layers)


class Discriminator(nn.Module):

    def __init__(self, conv_dim):
        """
        Initialize the Discriminator Module
        :param conv_dim: The depth of the first convolutional layer
        """
        super(Discriminator, self).__init__()
        self.conv_dim = conv_dim
        self.layer_1 = conv(3, conv_dim, 4, batch_norm=False)  # 16
        self.layer_2 = conv(conv_dim, conv_dim*2, 4)           # 8
        self.layer_3 = conv(conv_dim*2, conv_dim*4, 4)         # 4
        self.fc = nn.Linear(conv_dim*4*4*4, 1)

    def forward(self, x):
        """
        Forward propagation of the neural network
        :param x: The input to the neural network
        :return: Discriminator logits; the output of the neural network
        """
        # define feedforward behavior
        x = F.leaky_relu(self.layer_1(x))
        x = F.leaky_relu(self.layer_2(x))
        x = F.leaky_relu(self.layer_3(x))
        x = x.view(-1, self.conv_dim*4*4*4)
        x = self.fc(x)
        return x

Explanation

  • The architecture consists of three convolutional layers and a final fully connected layer, which outputs a single logit. This logit defines whether the image is real or fake.
  • Each convolution layer, except the first one, is followed by Batch Normalization (defined in the conv helper function).
  • For the hidden units, we have used the leaky ReLU activation function as discussed in the theory section.
  • After each convolution layer, the height and width are halved. For example, after the first convolution, 32x32 images become 16x16, and so on.

The output dimension can be calculated using the following formula:

O = (W - K + 2P) / S + 1

where O is the output height/length, W is the input height/length, K is the filter size, P is the padding, and S is the stride.
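For example, plugging in the values of the first discriminator layer above (a 32x32 input, kernel size K = 4, padding P = 1, stride S = 2):

O = (32 - 4 + 2*1) / 2 + 1 = 16

which matches the #16 comment next to self.layer_1 in the code.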

  • The number of feature maps after each convolution is based on the parameter conv_dim (in my implementation, conv_dim = 64).

In this model definition, we haven't applied the Sigmoid activation function to the final output logit. This is because of the choice of our loss function. Instead of using the normal BCE (Binary Cross-Entropy) loss, we will use BCEWithLogitsLoss, which is a more numerically stable version of BCE. BCEWithLogitsLoss is defined such that it first applies the Sigmoid activation function to the logit and then calculates the loss, unlike BCE. You can read more about these loss functions here.
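The toy sketch below (made-up logits and labels) illustrates the relationship: BCEWithLogitsLoss on raw logits produces the same value as BCELoss on sigmoid-activated outputs, just computed in a more numerically stable way.

import torch
import torch.nn as nn

logits = torch.tensor([2.0, -1.0, 0.5])   # raw discriminator outputs, no sigmoid applied
labels = torch.tensor([1.0, 0.0, 1.0])    # 1 = real, 0 = fake

loss_with_logits = nn.BCEWithLogitsLoss()(logits, labels)
loss_plain_bce = nn.BCELoss()(torch.sigmoid(logits), labels)

print(loss_with_logits.item(), loss_plain_bce.item())   # the two values match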

Generator

The generator should upsample an input and generate a new image of the same size as our training data, 32x32x3. For doing so, we will use transpose convolutional layers. Here is the code for the Generator Network.

# helper function: a transpose convolutional layer, optionally followed by batch normalization
def deconv(input_c, output, kernel_size, stride=2, padding=1, batch_norm=True):
    layers = []
    decon = nn.ConvTranspose2d(input_c, output, kernel_size, stride, padding, bias=False)
    layers.append(decon)

    if batch_norm:
        layers.append(nn.BatchNorm2d(output))
    return nn.Sequential(*layers)


class Generator(nn.Module):
    
    def __init__(self, z_size, conv_dim):
        """
        Initialize the Generator Module
        :param z_size: The length of the input latent vector, z
        :param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
        """
        super(Generator, self).__init__()
        # complete init function
        self.conv_dim = conv_dim
        self.fc = nn.Linear(z_size,conv_dim*8*2*2)
        self.layer_1 = deconv(conv_dim*8,conv_dim*4,4) #4
        self.layer_2 = deconv(conv_dim*4,conv_dim*2,4) #8
        self.layer_3 = deconv(conv_dim*2,conv_dim,4) #16
        self.layer_4 = deconv(conv_dim,3,4,batch_norm = False) #32
        
        
    def forward(self, x):
        """
        Forward propagation of the neural network
        :param x: The input to the neural network     
        :return: A 32x32x3 Tensor image as output
        """
        # define feedforward behavior
        x = self.fc(x)
        x = x.view(-1,self.conv_dim*8,2,2) #(batch_size,depth,width,height)
        x = F.relu(self.layer_1(x))
        x = F.relu(self.layer_2(x))
        x = F.relu(self.layer_3(x))
        x = torch.tanh(self.layer_4(x))
        return x

Explanation

  • The architecture consists of a fully connected layer followed by four transpose convolution layers. It is defined such that the output after the fourth transpose convolution layer is an image of dimension 32x32x3 (the size of an image from the training dataset).
  • The input to the generator is a vector of length z_size (z_size is the length of the noise vector).
  • Each transpose convolution layer, except the last one, is followed by Batch Normalization (defined in the deconv helper function).
  • For the hidden units, we have used the ReLU activation function.
  • After each transpose convolution layer, the height and width are doubled. For example, after the first transpose convolution, 2x2 feature maps become 4x4, and so on.

The output size can be calculated using the following formulas:

# Padding == Same:  H = H1 * stride

# Padding == Valid: H = (H1 - 1) * stride + HF

where H = output size, H1 = input size, HF = filter size.
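For example, the transpose convolutions above use kernel size 4, stride 2, and padding 1 (the "same"-style case), so the first one maps the 2x2 input from the fully connected layer to H = 2 * 2 = 4, i.e. a 4x4 feature map, matching the #4 comment in the code.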

  • The number of feature maps after each transpose convolution is based on the parameter conv_dim (in my implementation, conv_dim = 64).

(4) Initialize The Weights Of Your Networks

To help the models converge, I initialized the weights of the convolutional and linear layers in the model based on the original DCGAN paper, which says: All weights are initialized from a zero-centered Normal distribution with a standard deviation of 0.02.

def weights_init_normal(m):
    """
    Applies initial weights to certain layers in a model .
    The weights are taken from a normal distribution 
    with mean = 0, std dev = 0.02.
    :param m: A module or layer in a network    
    """
    # classname will be something like:
    # `Conv`, `BatchNorm2d`, `Linear`, etc.
    classname = m.__class__.__name__
    
    if hasattr(m,'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
        
        m.weight.data.normal_(0.0,0.02)
    
        if hasattr(m,'bias') and m.bias is not None:
            m.bias.data.zero_()

  • This initializes the weights from a normal distribution centered around 0, with a standard deviation of 0.02.
  • The bias terms, if they exist, may be left alone or set to 0.

(5) Build Complete Network

Define your models’ hyperparameters and instantiate the discriminator and generator from the classes defined in the Defining Model section. Here is the code for that.

def build_network(d_conv_dim, g_conv_dim, z_size):
    # define discriminator and generator
    D = Discriminator(d_conv_dim)
    G = Generator(z_size=z_size, conv_dim=g_conv_dim)

    # initialize model weights
    D.apply(weights_init_normal)
    G.apply(weights_init_normal)

    print(D)
    print()
    print(G)

    return D, G


# Define model hyperparams
d_conv_dim = 64
g_conv_dim = 64
z_size = 100

D, G = build_network(d_conv_dim, g_conv_dim, z_size)

When you run the above code, you get the following output, which also describes the model architecture for the Discriminator and Generator models.
[Output: printed architectures of the Discriminator and Generator networks]

(6) Training Process

The training process comprises defining the loss functions, selecting an optimizer, and finally training the model.

Discriminator And Generator Loss

Discriminator Loss

  • For the discriminator, the total loss is the sum (d_real_loss + d_fake_loss), where d_real_loss is the loss on images from the training data and d_fake_loss is the loss on images generated by the Generator Network. For example:

z — Noise vector

i — Image from the training set

G(z) — Generated image

D(G(z)) — Discriminator output on a generated image

D(i) — Discriminator output on a training dataset image

Loss = real_loss(D(i)) + fake_loss(D(G(z)))

  • Remember that we want the Discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that (keep this in mind while reading the code below).

Generator Loss

  • The Generator loss looks similar, only with flipped labels. The generator's goal is to get the discriminator to think its generated images are real. For example:

z — Noise vector

G(z) — Generated Image

D(G(z)) — Discriminator output on a generated image

Loss = real_loss(D(G(z)))

Here is the code for real_loss and fake_loss.

def real_loss(D_out):
    '''Calculates how close discriminator outputs are to being real.
       param, D_out: discriminator logits
       return: real loss'''
    batch_size = D_out.size(0)
    labels = torch.ones(batch_size)   # real labels = 1
    if train_on_gpu:
        labels = labels.cuda()
    criterion = nn.BCEWithLogitsLoss()
    loss = criterion(D_out.squeeze(), labels)
    return loss


def fake_loss(D_out):
    '''Calculates how close discriminator outputs are to being fake.
       param, D_out: discriminator logits
       return: fake loss'''
    batch_size = D_out.size(0)
    labels = torch.zeros(batch_size)  # fake labels = 0
    if train_on_gpu:
        labels = labels.cuda()
    criterion = nn.BCEWithLogitsLoss()
    loss = criterion(D_out.squeeze(), labels)
    return loss

Optimizers

For GANs we define two optimizers, one for the Generator and another for the Discriminator. The idea is to run them simultaneously to keep improving both networks. In this implementation, I have used the Adam optimizer in both cases. To know more about different optimizers, refer to this link.

import torch.optim as optim

# Create optimizers for the discriminator D and generator G
d_optimizer = optim.Adam(D.parameters(), lr=0.0002, betas=(0.5, 0.999))
g_optimizer = optim.Adam(G.parameters(), lr=0.0002, betas=(0.5, 0.999))

The learning rate (lr) and beta values are based on the original DCGAN paper.

Training

Training will involve alternating between training the discriminator and the generator. We'll use the real_loss and fake_loss functions defined earlier to help us calculate the Discriminator and Generator losses.

  • You should train the discriminator by alternating on real and fake images
  • Then the generator, which tries to trick the discriminator and should have an opposing loss function

Here is the code for training.

import pickle as pkl

def train(D, G, n_epochs, print_every=50):
    '''Trains adversarial networks for some number of epochs
       param, D: the discriminator network
       param, G: the generator network
       param, n_epochs: number of epochs to train for
       param, print_every: when to print and record the models' losses
       return: D and G losses'''

    # move models to GPU
    if train_on_gpu:
        D.cuda()
        G.cuda()

    # keep track of loss and generated, "fake" samples
    samples = []
    losses = []

    # Get some fixed data for sampling. These are images that are held
    # constant throughout training, and allow us to inspect the model's performance
    sample_size = 16
    fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
    fixed_z = torch.from_numpy(fixed_z).float()
    # move z to GPU if available
    if train_on_gpu:
        fixed_z = fixed_z.cuda()

    # epoch training loop
    for epoch in range(n_epochs):

        # batch training loop
        for batch_i, (real_images, _) in enumerate(celeba_train_loader):

            batch_size = real_images.size(0)
            real_images = scale(real_images)
            if train_on_gpu:
                real_images = real_images.cuda()

            # 1. Train the discriminator on real and fake images
            d_optimizer.zero_grad()
            d_out_real = D(real_images)
            z = np.random.uniform(-1, 1, size=(batch_size, z_size))
            z = torch.from_numpy(z).float()
            if train_on_gpu:
                z = z.cuda()
            d_loss = real_loss(d_out_real) + fake_loss(D(G(z)))
            d_loss.backward()
            d_optimizer.step()

            # 2. Train the generator with an adversarial loss
            G.train()
            g_optimizer.zero_grad()
            z = np.random.uniform(-1, 1, size=(batch_size, z_size))
            z = torch.from_numpy(z).float()
            if train_on_gpu:
                z = z.cuda()
            g_loss = real_loss(D(G(z)))
            g_loss.backward()
            g_optimizer.step()

            # Print some loss stats
            if batch_i % print_every == 0:
                # append discriminator loss and generator loss
                losses.append((d_loss.item(), g_loss.item()))
                # print discriminator and generator loss
                print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
                        epoch+1, n_epochs, d_loss.item(), g_loss.item()))

        ## AFTER EACH EPOCH ##
        # generate and save sample, fake images
        G.eval()  # for generating samples
        samples_z = G(fixed_z)
        samples.append(samples_z)
        G.train()  # back to training mode

    # Save training generator samples
    with open('train_samples.pkl', 'wb') as f:
        pkl.dump(samples, f)

    # finally return losses
    return losses


# set number of epochs
n_epochs = 40

# call training function
losses = train(D, G, n_epochs=n_epochs)

Training is performed over 40 epochs on a GPU, which is why I had to move my models and inputs from the CPU to the GPU.
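Since train() returns the recorded losses, a short sketch like the following can reproduce the training-loss plot shown in the results below:

import numpy as np
import matplotlib.pyplot as plt

losses_arr = np.array(losses)   # shape: (number of recorded points, 2) -> columns are (d_loss, g_loss)
plt.plot(losses_arr[:, 0], label='Discriminator')
plt.plot(losses_arr[:, 1], label='Generator')
plt.xlabel('Recorded step (every print_every batches)')
plt.ylabel('Loss')
plt.legend()
plt.show()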

(7) Results

  • The following is the plot of the training losses for the Generator and Discriminator recorded after each epoch.
    [Figure: Generator and Discriminator training losses over the course of training]
    The high fluctuation in the Generator training loss occurs because the input to the Generator Network is a batch of random noise vectors (each of length z_size), each sampled from a uniform distribution over (-1, 1), to generate new images in every epoch.

In the discriminator plot, we can observe a rise in the training loss (around 50 on the x-axis) followed by a gradual decrease until the end. This is because the Generator started to generate some realistic images that fooled the Discriminator, leading to an increase in error. But slowly, as training progresses, the Discriminator becomes better at classifying fake and real images, leading to a gradual decrease in training error.

  • Generated samples after 40 epochs.
    [Figure: sample faces produced by the Generator after 40 epochs of training]
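To inspect the saved samples yourself, a minimal sketch (reusing the imshow helper from earlier and the train_samples.pkl file written during training) could look like this; the generator outputs lie in the range -1 to 1 and are rescaled back to 0-1 before plotting:

import pickle as pkl
import matplotlib.pyplot as plt

with open('train_samples.pkl', 'rb') as f:
    samples = pkl.load(f)

# samples[-1] holds the 16 fixed-noise images generated after the final epoch
final_samples = samples[-1].detach().cpu()

fig = plt.figure(figsize=(16, 4))
for idx, img in enumerate(final_samples):
    ax = fig.add_subplot(2, 8, idx + 1, xticks=[], yticks=[])
    imshow((img + 1) / 2)   # rescale from (-1, 1) back to (0, 1) for display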

Our model was able to generate new images of fake human faces that look as realistic as possible. We can also observe that all images are lighter in shade; even the brown faces are a bit lighter. This is because the CelebA dataset is biased: it consists of “celebrity” faces that are mostly white. That being said, DCGAN successfully generates near-real images from mere noise.


#machine-learning #ai #data-science #deep-learning
