Garry Taylor

Fake Face Generator Using DCGAN Model

Overview

In this article, we will define and train a Deep Convolutional Generative Adversarial Network (DCGAN) on a dataset of faces. The main objective is to train a Generator Network that generates new images of fake human faces that look as realistic as possible.

To do so, we will first build an intuition for how **GAN**s and **DCGAN**s work and then combine this knowledge to build a Fake Face Generator model. By the end of this post, you will be able to generate your own fake samples on any given dataset, using the concepts from this article.

Introduction

The following article is divided into two sections:

  • Theory — Understanding the intuition behind the working of _GAN_s and _DCGAN_s.
  • Practical — Implementing the Fake Face Generator in PyTorch.

This article covers both sections, so let’s begin the journey.

Theory

Intuition Behind Generative Adversarial Networks (GANs)


  • Definition

_GAN_s, in general, can be defined as generative models that let us generate a whole image in parallel. Like several other kinds of generative models, _GAN_s use a differentiable function, represented by a neural network, as a Generator Network.

  • Generator Network

The Generator Network takes random noise as input, then runs the noise through the differentiable function (neural network) to transform and reshape it so that it has a recognizable structure similar to the images in the training dataset. The output of the Generator is determined by the choice of the input random noise. Running the Generator Network over several different random input noises produces different realistic output images.

The end goal of the Generator is to learn a distribution similar to the distribution of the training dataset so that it can sample realistic images. To do so, the Generator Network needs to be trained. The training process of _GAN_s is very different from that of other generative models (most generative models, for example Variational Auto-Encoders (VAEs), are trained by adjusting the parameters to maximize the probability that the Generator produces realistic samples). _GAN_s, on the other hand, use a second network to train the Generator, called the Discriminator Network.

  • Discriminator Network

The Discriminator Network is a basic classifier network that outputs the probability that an image is real. During the training process, the Discriminator Network is shown real images from the training set half the time and fake images from the Generator the other half of the time. The Discriminator's target is to assign a probability near 1 to real images and a probability near 0 to fake images.

The Generator, on the other hand, tries the opposite: its target is to generate fake images for which the Discriminator outputs a probability close to 1 (considering them to be real images from the training set). As training goes on, the Discriminator becomes better at classifying real and fake images, so to fool the Discriminator, the Generator is forced to improve and produce more realistic samples. So we can say that:

GANs can be considered a two-player (Generator and Discriminator) non-cooperative game, where each player wishes to minimize its cost function.
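
For reference, this two-player game can be written as the minimax objective from the original GAN paper, where the Discriminator D tries to maximize the value function and the Generator G tries to minimize it:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]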

Difference Between GANs and DCGANs

_DCGAN_s are very similar to _GAN_s but specifically focus on using deep convolutional networks in place of the fully connected networks used in vanilla _GAN_s.

Convolutional networks help in finding deep correlations within an image; that is, they look for spatial correlations. This means a DCGAN is a better option for image/video data, whereas _GAN_s can be considered a general idea on which _DCGAN_s and many other architectures (CGAN, CycleGAN, StarGAN and many others) have been developed.

In this article, we are mainly working with image data, which means that a DCGAN is a better option than a vanilla GAN. So from now on, we will be focusing on DCGANs.

Some Tips For Training DCGANs

All of these training tips apply to vanilla _GAN_s as well.

  • Make sure that both the Discriminator and the Generator have at least one hidden layer. This ensures that both models have the Universal Approximation Property.

The Universal Approximation Property states that a feed-forward network with a single hidden layer containing a finite number of hidden units can approximate any probability distribution, given enough hidden units.

  • For the hidden units, many activation functions could work, but Leaky ReLUs are the most popular. Leaky ReLUs make sure that the gradient flows through the entire architecture. This is very important for _DCGAN_s because the only way the Generator can learn is to receive a gradient from the Discriminator.
  • One of the most popular activation functions for the output of the Generator Network is the hyperbolic tangent (tanh) activation function (based on the Improved Techniques for Training GANs paper).
  • As the Discriminator is a binary classifier, we will use the Sigmoid activation function to get the final probability.

So far, we have talked about the working intuition and some tips and tricks for training _GAN_s/_DCGAN_s. But still, many questions are left unanswered. Some of them are:

Which optimizer should we choose? How is the cost function defined? How long does the network need to be trained? These questions, and many others, are covered in the Practical section.

Practical

The implementation part is broken down into a series of tasks, from loading the data to defining and training the adversarial networks. At the end of this section, you’ll be able to visualize the results of your trained Generator to see how it performs; your generated samples should look like fairly realistic faces with small amounts of noise.

(1) Get The Data

You’ll be using the CelebFaces Attributes Dataset (CelebA) to train your adversarial networks. This is a more complex dataset than MNIST, so we need to define a deeper network (a DCGAN) to generate good results. I would suggest you use a GPU for training.

(2) Preparing Data

As the main objective of this article is building a DCGAN model, instead of doing the preprocessing ourselves, we will use a pre-processed dataset. You can download the smaller subset of the CelebA dataset from here. If you are interested in doing the pre-processing yourself, do the following:

  • Crop the images to remove the parts that don’t include the face.
  • Resize them into 64x64x3 NumPy images (a minimal preprocessing sketch follows below).
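
If you do want to preprocess the raw CelebA images yourself, here is a minimal sketch of that step. The crop box values and the helper name are illustrative assumptions, not part of the original pipeline; adjust the box so it roughly covers the face region in your copy of the dataset.

import numpy as np
from PIL import Image

def preprocess_face(path, crop_box=(25, 45, 153, 173), out_size=(64, 64)):
    """Crop away most of the background and resize to a 64x64x3 NumPy image."""
    img = Image.open(path).convert('RGB')   # load as RGB
    img = img.crop(crop_box)                # keep the (assumed) face region
    img = img.resize(out_size)              # resize to 64x64
    return np.array(img)                    # 64x64x3 NumPy array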

Now, we will create a DataLoader to access the images in batches.

import torch
from torchvision import datasets, transforms

def get_dataloader(batch_size, image_size, data_dir='train/'):
    """
    Batch the neural network data using DataLoader
    :param batch_size: The size of each batch; the number of images in a batch
    :param image_size: The square size of the image data (x, y)
    :param data_dir: Directory where image data is located
    :return: DataLoader with batched data
    """
    transform = transforms.Compose([transforms.Resize(image_size),
                                    transforms.CenterCrop(image_size),
                                    transforms.ToTensor()])

    dataset = datasets.ImageFolder(data_dir, transform=transform)

    dataloader = torch.utils.data.DataLoader(dataset=dataset,
                                             batch_size=batch_size,
                                             shuffle=True)
    return dataloader

# Define function hyperparameters
batch_size = 256
img_size = 32

# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)

DataLoader hyperparameters:

  • You can decide on any reasonable batch_size parameter.
  • However, your image_size must be 32. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces.

Next, we will write some code to get a visual representation of the dataset.

import numpy as np
import matplotlib.pyplot as plt

def imshow(img):
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = next(dataiter)  # _ for no labels

# plot the images in the batch
fig = plt.figure(figsize=(20, 4))
plot_size = 20
for idx in np.arange(plot_size):
    ax = fig.add_subplot(2, plot_size // 2, idx + 1, xticks=[], yticks=[])
    imshow(images[idx])

Keep in mind that we have to convert the Tensor images into a NumPy type and transpose the dimensions to display an image correctly with the code above (in the DataLoader we transformed the images to Tensors). Run this piece of code to get a visualization of the dataset.

[Figure: a sample batch of images from the pre-processed CelebA dataset]

Now, before beginning with the next section (Defining Model), we will write a function to scale the image data to a pixel range of -1 to 1, which we will use while training. The reason for doing so is that the output of a tanh-activated generator contains pixel values in the range -1 to 1, so we need to rescale our training images to the same range (right now, they are in the range 0 to 1).

def scale(x, feature_range=(-1, 1)):
    ''' Scale takes in an image x and returns that image, scaled
       with a feature_range of pixel values from -1 to 1. 
       This function assumes that the input x is already scaled from 0-1.'''
    # assume x is scaled to (0, 1)
    # scale to feature_range and return scaled x
    min, max = feature_range
    x = x*(max-min) + min
    return x
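
As a quick sanity check (a minimal snippet, assuming the DataLoader defined earlier), the scaled pixel values of a batch should now lie in the range -1 to 1:

# grab one batch from the DataLoader defined earlier and scale it
imgs, _ = next(iter(celeba_train_loader))
scaled_imgs = scale(imgs)
print(scaled_imgs.min().item(), scaled_imgs.max().item())  # roughly -1.0 and 1.0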

(3) Defining Model

A GAN is comprised of two adversarial networks, a discriminator and a generator. So in this section, we will define architectures for both of them.

Discriminator

This is a convolutional classifier, only without any MaxPooling layers. Here is the code for the Discriminator Network.

import torch.nn as nn
import torch.nn.functional as F

def conv(input_c, output, kernel_size, stride=2, padding=1, batch_norm=True):
    layers = []
    con = nn.Conv2d(input_c, output, kernel_size, stride, padding, bias=False)
    layers.append(con)

    if batch_norm:
        layers.append(nn.BatchNorm2d(output))

    return nn.Sequential(*layers)

class Discriminator(nn.Module):

    def __init__(self, conv_dim):
        """
        Initialize the Discriminator Module
        :param conv_dim: The depth of the first convolutional layer
        """
        super(Discriminator, self).__init__()
        self.conv_dim = conv_dim
        self.layer_1 = conv(3, conv_dim, 4, batch_norm=False)  # 16
        self.layer_2 = conv(conv_dim, conv_dim*2, 4)           # 8
        self.layer_3 = conv(conv_dim*2, conv_dim*4, 4)         # 4
        self.fc = nn.Linear(conv_dim*4*4*4, 1)

    def forward(self, x):
        """
        Forward propagation of the neural network
        :param x: The input to the neural network
        :return: Discriminator logits; the output of the neural network
        """
        # define feedforward behavior
        x = F.leaky_relu(self.layer_1(x))
        x = F.leaky_relu(self.layer_2(x))
        x = F.leaky_relu(self.layer_3(x))
        x = x.view(-1, self.conv_dim*4*4*4)
        x = self.fc(x)
        return x

Explanation

  • The architecture consists of three convolutional layers and a final fully connected layer, which outputs a single logit. This logit defines whether the image is real or fake.
  • Each convolutional layer, except the first one, is followed by Batch Normalization (defined in the conv helper function).
  • For the hidden units, we have used the Leaky ReLU activation function, as discussed in the theory section.
  • After each convolutional layer, the height and width are halved. For example, after the first convolution, 32x32 images become 16x16, and so on.

Output dimension can be calculated using the following formula:
O = (W - K + 2P)/S + 1

where O is the output height/length, W is the input height/length, K is the filter size, P is the padding, and S is the stride.
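
As a quick check of this formula against the discriminator above (kernel size 4, stride 2, padding 1), a small helper reproduces the 32 → 16 → 8 → 4 progression noted in the layer comments:

def conv_out_size(W, K, P, S):
    # O = (W - K + 2P)/S + 1
    return (W - K + 2 * P) // S + 1

print(conv_out_size(32, 4, 1, 2))  # 16
print(conv_out_size(16, 4, 1, 2))  # 8
print(conv_out_size(8, 4, 1, 2))   # 4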

  • The number of feature maps after each convolution is based on the parameter conv_dim (in my implementation, conv_dim = 64).

In this model definition, we haven’t applied the Sigmoid activation function to the final output logit. This is because of the choice of our loss function. Instead of using the normal BCE (Binary Cross-Entropy) loss, we will be using BCEWithLogitsLoss, which is a numerically stable version of BCE. Unlike BCE, BCEWithLogitsLoss first applies the Sigmoid activation function to the logit and then calculates the loss. You can read more about these loss functions here.
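
As a small illustration (not part of the model code), BCEWithLogitsLoss on raw logits gives the same value as applying a Sigmoid first and then BCELoss, just computed in a more numerically stable way:

import torch
import torch.nn as nn

logits = torch.randn(4, 1)    # pretend discriminator outputs (raw logits)
targets = torch.ones(4, 1)    # "real" labels

loss_with_logits = nn.BCEWithLogitsLoss()(logits, targets)
loss_manual = nn.BCELoss()(torch.sigmoid(logits), targets)
print(torch.isclose(loss_with_logits, loss_manual))  # tensor(True)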

Generator

The Generator should upsample an input and generate a new image of the same size as our training data, 32x32x3. To do so, we will use transposed convolutional layers. Here is the code for the Generator Network.

def deconv(input_c, output, kernel_size, stride=2, padding=1, batch_norm=True):
    layers = []
    decon = nn.ConvTranspose2d(input_c, output, kernel_size, stride, padding, bias=False)
    layers.append(decon)

    if batch_norm:
        layers.append(nn.BatchNorm2d(output))
    return nn.Sequential(*layers)

class Generator(nn.Module):
    
    def __init__(self, z_size, conv_dim):
        """
        Initialize the Generator Module
        :param z_size: The length of the input latent vector, z
        :param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
        """
        super(Generator, self).__init__()
        # complete init function
        self.conv_dim = conv_dim
        self.fc = nn.Linear(z_size,conv_dim*8*2*2)
        self.layer_1 = deconv(conv_dim*8,conv_dim*4,4) #4
        self.layer_2 = deconv(conv_dim*4,conv_dim*2,4) #8
        self.layer_3 = deconv(conv_dim*2,conv_dim,4) #16
        self.layer_4 = deconv(conv_dim,3,4,batch_norm = False) #32
        
        
    def forward(self, x):
        """
        Forward propagation of the neural network
        :param x: The input to the neural network     
        :return: A 32x32x3 Tensor image as output
        """
        # define feedforward behavior
        x = self.fc(x)
        x = x.view(-1,self.conv_dim*8,2,2) #(batch_size,depth,width,height)
        x = F.relu(self.layer_1(x))
        x = F.relu(self.layer_2(x))
        x = F.relu(self.layer_3(x))
        x = torch.tanh(self.layer_4(x))
        return x

Explanation

  • The architecture consists of a fully connected layer followed by four transpose convolutional layers. It is defined such that the output of the fourth transpose convolutional layer is an image of dimension 32x32x3 (the size of an image from the training dataset).
  • The input to the Generator is a vector of length z_size (the length of the noise vector).
  • Each transpose convolutional layer, except the last one, is followed by Batch Normalization (defined in the deconv helper function).
  • For the hidden units, we have used the ReLU activation function.
  • After each transpose convolutional layer, the height and width are doubled. For example, after the first transpose convolution, 2x2 feature maps become 4x4, and so on.

The output dimension can be calculated using the following formulas:

Padding == Same: H = H1 * stride

Padding == Valid: H = (H1 - 1) * stride + HF

where H = output size, H1 = input size, HF = filter size.
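
In PyTorch terms, the ConvTranspose2d layers used here (with no output padding and dilation 1) follow H_out = (H_in - 1) * stride - 2 * padding + kernel_size; a quick check reproduces the 2 → 4 → 8 → 16 → 32 progression noted in the layer comments:

def deconv_out_size(H_in, K, P, S):
    # H_out = (H_in - 1) * S - 2P + K  (ConvTranspose2d, no output padding)
    return (H_in - 1) * S - 2 * P + K

print(deconv_out_size(2, 4, 1, 2))   # 4
print(deconv_out_size(4, 4, 1, 2))   # 8
print(deconv_out_size(8, 4, 1, 2))   # 16
print(deconv_out_size(16, 4, 1, 2))  # 32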

  • The number of feature maps after each transpose convolution is based on the parameter conv_dim (in my implementation, conv_dim = 64).

(4) Initialize The Weights Of Your Networks

To help the models converge, I initialized the weights of the convolutional and linear layers in the model based on the original DCGAN paper, which says: All weights are initialized from a zero-centered Normal distribution with a standard deviation of 0.02.

def weights_init_normal(m):
    """
    Applies initial weights to certain layers in a model .
    The weights are taken from a normal distribution 
    with mean = 0, std dev = 0.02.
    :param m: A module or layer in a network    
    """
    # classname will be something like:
    # `Conv`, `BatchNorm2d`, `Linear`, etc.
    classname = m.__class__.__name__
    
    if hasattr(m,'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1):
        
        m.weight.data.normal_(0.0,0.02)
    
        if hasattr(m,'bias') and m.bias is not None:
            m.bias.data.zero_()

  • This initializes the weights from a normal distribution centered around 0, with a standard deviation of 0.02.
  • The bias terms, if they exist, may be left alone or set to 0.

(5) Build Complete Network

Define your models’ hyperparameters and instantiate the discriminator and generator from the classes defined in the Defining Model section. Here is the code for that.

def build_network(d_conv_dim, g_conv_dim, z_size):
    # define discriminator and generator
    D = Discriminator(d_conv_dim)
    G = Generator(z_size=z_size, conv_dim=g_conv_dim)

    # initialize model weights
    D.apply(weights_init_normal)
    G.apply(weights_init_normal)

    print(D)
    print()
    print(G)

    return D, G


# Define model hyperparams
d_conv_dim = 64
g_conv_dim = 64
z_size = 100

D, G = build_network(d_conv_dim, g_conv_dim, z_size)

When you run the above code, you get the following output, which describes the model architecture of the Discriminator and Generator networks.
[Output: printed architectures of the Discriminator and Generator models]

(6) Training Process

The training process comprises defining the loss functions, selecting the optimizers, and finally training the model.

Discriminator And Generator Loss

Discriminator Loss

  • For the discriminator, the total loss is the sum (d_real_loss + d_fake_loss), where d_real_loss is the loss obtained on images from the training data and d_fake_loss is the loss obtained on images generated by the Generator Network. For example:

z — Noise vector

i — Image from the training set

G(z) — Generated image

D(G(z)) — Discriminator output on a generated image

D(i) — Discriminator output on a training dataset image

Loss = real_loss(D(i)) + fake_loss(D(G(z)))

  • Remember that we want the Discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that (keep this in mind while reading the code below).

Generator Loss

  • The Generator loss looks similar, only with flipped labels. The Generator’s goal is to get the Discriminator to think its generated images are real. For example:

z — Noise vector

G(z) — Generated Image

D(G(z)) — Discriminator output on a generated image

Loss = real_loss(D(G(z)))

Here is the code for real_loss and fake_loss:

# flag used throughout training to move tensors to the GPU when one is available
train_on_gpu = torch.cuda.is_available()

def real_loss(D_out):
    '''Calculates how close discriminator outputs are to being real.
       param, D_out: discriminator logits
       return: real loss'''
    batch_size = D_out.size(0)
    labels = torch.ones(batch_size)
    if train_on_gpu:
        labels = labels.cuda()
    criterion = nn.BCEWithLogitsLoss()
    loss = criterion(D_out.squeeze(), labels)
    return loss

def fake_loss(D_out):
    '''Calculates how close discriminator outputs are to being fake.
       param, D_out: discriminator logits
       return: fake loss'''
    batch_size = D_out.size(0)
    labels = torch.zeros(batch_size)
    if train_on_gpu:
        labels = labels.cuda()
    criterion = nn.BCEWithLogitsLoss()
    loss = criterion(D_out.squeeze(), labels)
    return loss

Optimizers

For _GAN_s we define two optimizers, one for the Generator and one for the Discriminator. The idea is to run them simultaneously to keep improving both networks. In this implementation, I have used the Adam optimizer in both cases. To know more about different optimizers, refer to this link.

import torch.optim as optim

# Create optimizers for the discriminator D and generator G
d_optimizer = optim.Adam(D.parameters(), lr=0.0002, betas=(0.5, 0.999))
g_optimizer = optim.Adam(G.parameters(), lr=0.0002, betas=(0.5, 0.999))

The learning rate (lr) and beta values are based on the original DCGAN paper.

Training

Training will involve alternating between training the discriminator and the generator. We’ll use the real_loss and fake_loss functions defined earlier, to help us in calculating the Discriminator and Generator losses.

  • You should train the discriminator by alternating on real and fake images
  • Then the generator, which tries to trick the discriminator and should have an opposing loss function

Here is the code for training.

import pickle as pkl

def train(D, G, n_epochs, print_every=50):
    '''Trains adversarial networks for some number of epochs
       param, D: the discriminator network
       param, G: the generator network
       param, n_epochs: number of epochs to train for
       param, print_every: when to print and record the models' losses
       return: D and G losses'''

    # move models to GPU
    if train_on_gpu:
        D.cuda()
        G.cuda()

    # keep track of loss and generated, "fake" samples
    samples = []
    losses = []

    # Get some fixed data for sampling. These are images that are held
    # constant throughout training, and allow us to inspect the model's performance
    sample_size = 16
    fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size))
    fixed_z = torch.from_numpy(fixed_z).float()
    # move z to GPU if available
    if train_on_gpu:
        fixed_z = fixed_z.cuda()

    # epoch training loop
    for epoch in range(n_epochs):

        # batch training loop
        for batch_i, (real_images, _) in enumerate(celeba_train_loader):

            batch_size = real_images.size(0)
            real_images = scale(real_images)
            if train_on_gpu:
                real_images = real_images.cuda()

            # 1. Train the discriminator on real and fake images
            d_optimizer.zero_grad()
            d_out_real = D(real_images)
            z = np.random.uniform(-1, 1, size=(batch_size, z_size))
            z = torch.from_numpy(z).float()
            if train_on_gpu:
                z = z.cuda()
            d_loss = real_loss(d_out_real) + fake_loss(D(G(z)))
            d_loss.backward()
            d_optimizer.step()

            # 2. Train the generator with an adversarial loss
            G.train()
            g_optimizer.zero_grad()
            z = np.random.uniform(-1, 1, size=(batch_size, z_size))
            z = torch.from_numpy(z).float()
            if train_on_gpu:
                z = z.cuda()
            g_loss = real_loss(D(G(z)))
            g_loss.backward()
            g_optimizer.step()

            # Print some loss stats
            if batch_i % print_every == 0:
                # append discriminator loss and generator loss
                losses.append((d_loss.item(), g_loss.item()))
                # print discriminator and generator loss
                print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format(
                        epoch+1, n_epochs, d_loss.item(), g_loss.item()))

        ## AFTER EACH EPOCH ##
        # this code assumes your generator is named G, feel free to change the name
        # generate and save sample, fake images
        G.eval()  # for generating samples
        samples_z = G(fixed_z)
        samples.append(samples_z)
        G.train()  # back to training mode

    # Save training generator samples
    with open('train_samples.pkl', 'wb') as f:
        pkl.dump(samples, f)

    # finally return losses
    return losses


# set number of epochs
n_epochs = 40

# call training function
losses = train(D, G, n_epochs=n_epochs)

The training was performed over 40 epochs using a GPU, which is why I had to move my models and inputs from the CPU to the GPU.
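
To inspect the saved samples yourself, here is a minimal sketch (assuming the train_samples.pkl file written by the training loop above and a sample_size of 16) that rescales the tanh outputs from -1..1 back to 0..1 and plots them:

import pickle as pkl
import matplotlib.pyplot as plt

with open('train_samples.pkl', 'rb') as f:
    samples = pkl.load(f)

def view_samples(epoch_samples):
    fig, axes = plt.subplots(2, 8, figsize=(16, 4))
    for ax, img in zip(axes.flatten(), epoch_samples):
        img = img.detach().cpu().numpy().transpose(1, 2, 0)  # CHW -> HWC
        img = (img + 1) / 2                                   # [-1, 1] -> [0, 1]
        ax.imshow(img)
        ax.axis('off')

view_samples(samples[-1])  # samples generated after the final epoch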

(7) Results

  • The following is the plot of the training losses for the Generator and Discriminator recorded after each epoch.
    [Figure: training losses of the Generator and Discriminator recorded during training]
    The high fluctuation in the Generator training loss occurs because the input to the Generator Network is a batch of random noise vectors (each of length z_size), sampled from a uniform distribution over (-1, 1), used to generate new images in every epoch.

In the Discriminator plot, we can observe a rise in the training loss (around 50 on the x-axis) followed by a gradual decrease until the end. This is because the Generator has started to generate some realistic images that fool the Discriminator, leading to an increase in its error. But as training progresses, the Discriminator becomes better at classifying fake and real images, leading to a gradual decrease in its training error.

  • Generated samples after 40 epochs.
    [Figure: fake face samples generated after 40 epochs]

Our model was able to generate new images of fake human faces that look as realistic as possible. We can also observe that all the images are lighter in shade; even the brown faces are a bit lighter. This is because the CelebA dataset is biased: it consists of “celebrity” faces that are mostly white. That being said, the DCGAN successfully generates near-real images from mere noise.

#machine-learning #ai #data-science #deep-learning
