Royce Reinger

Have Fun with Machine Learning: A Guide for Beginners

Preface

This is a hands-on guide to machine learning for programmers with no background in AI. Using a neural network doesn’t require a PhD, and you don’t need to be the person who makes the next breakthrough in AI in order to use what exists today. What we have now is already breathtaking, and highly usable. I believe that more of us need to play with this stuff like we would any other open source technology, instead of treating it like a research topic.

In this guide our goal will be to write a program that uses machine learning to predict, with a high degree of certainty, whether the images in data/untrained-samples are of dolphins or seahorses using only the images themselves, and without having seen them before. Here are two example images we'll use:

A dolphin A seahorse

To do that we’re going to train and use a Convolutional Neural Network (CNN). We’re going to approach this from the point of view of a practitioner vs. from first principles. There is so much excitement about AI right now, but much of what’s being written feels like being taught to do tricks on your bike by a physics professor at a chalkboard instead of your friends in the park.

I’ve decided to write this on Github vs. as a blog post because I’m sure that some of what I’ve written below is misleading, naive, or just plain wrong. I’m still learning myself, and I’ve found the lack of solid beginner documentation an obstacle. If you see me making a mistake or missing important details, please send a pull request.

With all of that out the way, let me show you how to do some tricks on your bike!

Overview

Here’s what we’re going to explore:

  • Setup and use existing, open source machine learning technologies, specifically Caffe and DIGITS
  • Create a dataset of images
  • Train a neural network from scratch
  • Test our neural network on images it has never seen before
  • Improve our neural network’s accuracy by fine tuning existing neural networks (AlexNet and GoogLeNet)
  • Deploy and use our neural network

This guide won’t teach you how neural networks are designed, cover much theory, or use a single mathematical expression. I don’t pretend to understand most of what I’m going to show you. Instead, we’re going to use existing things in interesting ways to solve a hard problem.

Q: "I know you said we won’t talk about the theory of neural networks, but I’m feeling like I’d at least like an overview before we get going. Where should I start?"

There are literally hundreds of introductions to this, from short posts to full online courses. Depending on how you like to learn, here are three options for a good starting point:

  • This fantastic blog post by J Alammar, which introduces the concepts of neural networks using intuitive examples.
  • Similarly, this video introduction by Brandon Rohrer is a really good intro to Convolutional Neural Networks like we'll be using
  • If you’d rather have a bit more theory, I’d recommend this online book by Michael Nielsen.

Setup

Installing the software we'll use (Caffe and DIGITS) can be frustrating, depending on your platform and OS version. By far the easiest way to do it is using Docker. Below we examine how to do it with Docker, as well as how to do it natively.

Option 1a: Installing Caffe Natively

First, we’re going to be using the Caffe deep learning framework from the Berkeley Vision and Learning Center (BSD licensed).

Q: “Wait a minute, why Caffe? Why not use something like TensorFlow, which everyone is talking about these days…”

There are a lot of great choices available, and you should look at all the options. TensorFlow is great, and you should play with it. However, I’m using Caffe for a number of reasons:

  • It’s tailormade for computer vision problems
  • It has support for C++ and Python (with node.js support coming)
  • It’s fast and stable

But the number one reason I’m using Caffe is that you don’t need to write any code to work with it. You can do everything declaratively (Caffe uses structured text files to define the network architecture) and using command-line tools. Also, you can use some nice front-ends for Caffe to make training and validating your network a lot easier. We’ll be using nVidia’s DIGITS tool below for just this purpose.

Caffe can be a bit of work to get installed. There are installation instructions for various platforms, including some prebuilt Docker or AWS configurations.

NOTE: when making my walkthrough, I used the following non-released version of Caffe from their Github repo: https://github.com/BVLC/caffe/commit/5a201dd960840c319cefd9fa9e2a40d2c76ddd73

On a Mac it can be frustrating to get working, with version issues halting your progress at various steps in the build. It took me a couple of days of trial and error. There are a dozen guides I followed, each with slightly different problems. In the end I found this one to be the closest. I’d also recommend this post, which is quite recent and links to many of the same discussions I saw.

Getting Caffe installed is by far the hardest thing we'll do, which is pretty neat, since you’d assume the AI aspects would be harder! Don’t give up if you have issues, it’s worth the pain. If I was doing this again, I’d probably use an Ubuntu VM instead of trying to do it on Mac directly. There's also a Caffe Users group, if you need answers.

Q: “Do I need powerful hardware to train a neural network? What if I don’t have access to fancy GPUs?”

It’s true, deep neural networks require a lot of computing power and energy to train...if you’re training them from scratch and using massive datasets. We aren’t going to do that. The secret is to use a pretrained network that someone else has already invested hundreds of hours of compute time training, and then to fine tune it to your particular dataset. We’ll look at how to do this below, but suffice it to say that everything I’m going to show you, I’m doing on a year old MacBook Pro without a fancy GPU.

As an aside, because I have an integrated Intel graphics card vs. an nVidia GPU, I decided to use the OpenCL Caffe branch, and it’s worked great on my laptop.

When you’re done installing Caffe, you should have, or be able to do, all of the following:

  • A directory that contains your built caffe. If you did this in the standard way, there will be a build/ dir which contains everything you need to run caffe, the Python bindings, etc. The parent dir that contains build/ will be your CAFFE_ROOT (we’ll need this later).
  • Running make test && make runtest should pass
  • After installing all the Python deps (doing pip install -r requirements.txt in python/), running make pycaffe && make pytest should pass
  • You should also run make distribute in order to create a distributable version of caffe with all necessary headers, binaries, etc. in distribute/.

On my machine, with Caffe fully built, I’ve got the following basic layout in my CAFFE_ROOT dir:

caffe/
    build/
        python/
        lib/
        tools/
            caffe ← this is our main binary 
    distribute/
        python/
        lib/
        include/
        bin/
        proto/

At this point, we have everything we need to train, test, and program with neural networks. In the next section we’ll add a user-friendly, web-based front end to Caffe called DIGITS, which will make training and testing our networks much easier.

Option 1b: Installing DIGITS Natively

nVidia’s Deep Learning GPU Training System, or DIGITS, is a BSD-licensed Python web app for training neural networks. While it’s possible to do everything DIGITS does in Caffe at the command-line, or with code, using DIGITS makes it a lot easier to get started. I also found it more fun, due to the great visualizations, real-time charts, and other graphical features. Since you’re experimenting and trying to learn, I highly recommend beginning with DIGITS.

There are quite a few good docs at https://github.com/NVIDIA/DIGITS/tree/master/docs, including a few Installation, Configuration, and Getting Started pages. I’d recommend reading through everything there before you continue, as I’m not an expert on everything you can do with DIGITS. There's also a public DIGITS User Group if you have questions you need to ask.

There are various ways to install and run DIGITS, from Docker to pre-baked packages on Linux, or you can build it from source. I’m on a Mac, so I built it from source.

NOTE: In my walkthrough I've used the following non-released version of DIGITS from their Github repo: https://github.com/NVIDIA/DIGITS/commit/81be5131821ade454eb47352477015d7c09753d9

Because it’s just a bunch of Python scripts, it was fairly painless to get working. The one thing you need to do is tell DIGITS where your CAFFE_ROOT is by setting an environment variable before starting the server:

export CAFFE_ROOT=/path/to/caffe
./digits-devserver

NOTE: on Mac I had issues with the server scripts assuming my Python binary was called python2, where I only have python2.7. You can symlink it in /usr/bin or modify the DIGITS startup script(s) to use the proper binary on your system.

Once the server is started, you can do everything else via your web browser at http://localhost:5000, which is what I'll do below.

Option 2: Caffe and DIGITS using Docker

Install Docker, if not already installed, then run the following command in order to pull and run a full Caffe + Digits container. A few things to note:

  • make sure port 8080 isn't already in use by another program. If it is, change it to any other port you want.
  • change /path/to/this/repository to the location of this cloned repo; /data/repo within the container will be bound to this directory. This is useful for accessing the images discussed below.

docker run --name digits -d -p 8080:5000 -v /path/to/this/repository:/data/repo kaixhin/digits

Now that we have our container running, you can open http://localhost:8080 in your web browser. Everything in the repository is now in the container directory /data/repo. That's it. You've now got Caffe and DIGITS working.

If you need shell access, use the following command:

docker exec -it digits /bin/bash

Training a Neural Network

Training a neural network involves a few steps:

  1. Assemble and prepare a dataset of categorized images
  2. Define the network’s architecture
  3. Train and Validate this network using the prepared dataset

We’re going to do this 3 different ways, in order to show the difference between starting from scratch and using a pretrained network, and also to show how to work with two popular pretrained networks (AlexNet, GoogLeNet) that are commonly used with Caffe and DIGITS.

For our training attempts, we’ll use a small dataset of Dolphins and Seahorses. I’ve put the images I used in data/dolphins-and-seahorses. You need at least 2 categories, but could have many more (some of the networks we’ll use were trained on 1000+ image categories). Our goal is to be able to give an image to our network and have it tell us whether it’s a Dolphin or a Seahorse.

Prepare the Dataset

The easiest way to begin is to divide your images into a categorized directory layout:

dolphins-and-seahorses/
    dolphin/
        image_0001.jpg
        image_0002.jpg
        image_0003.jpg
        ...
    seahorse/
        image_0001.jpg
        image_0002.jpg
        image_0003.jpg
        ...

Here each directory is a category we want to classify, and each image within that category dir an example we’ll use for training and validation.

Q: “Do my images have to be the same size? What about the filenames, do they matter?”

No to both. The image sizes will be normalized before we feed them into the network. We’ll eventually want colour images of 256 x 256 pixels, but DIGITS will crop or squash (we'll squash) our images automatically in a moment. The filenames are irrelevant--it’s only important which category they are contained within.

Q: “Can I do more complex segmentation of my categories?”

Yes. See https://github.com/NVIDIA/DIGITS/blob/digits-4.0/docs/ImageFolderFormat.md.

We want to use these images on disk to create a New Dataset, and specifically, a Classification Dataset.

Create New Dataset

We’ll use the defaults DIGITS gives us, and point Training Images at the path to our data/dolphins-and-seahorses folder. DIGITS will use the categories (dolphin and seahorse) to create a database of squashed, 256 x 256 Training (75%) and Testing (25%) images.

Give your Dataset a name, dolphins-and-seahorses, and click Create.

New Image Classification Dataset

This will create our dataset, which took only 4s on my laptop. In the end I have 92 Training images (49 dolphin, 43 seahorse) in 2 categories, with 30 Validation images (16 dolphin, 14 seahorse). It’s a really small dataset, but perfect for our experimentation and learning purposes, because it won’t take forever to train and validate a network that uses it.

You can Explore the db if you want to see the images after they have been squashed.

Explore the db

Training: Attempt 1, from Scratch

Back in the DIGITS Home screen, we need to create a new Classification Model:

Create Classification Model

We’ll start by training a model that uses our dolphins-and-seahorses dataset, and the default settings DIGITS provides. For our first network, we’ll choose to use one of the standard network architectures, AlexNet (pdf). AlexNet’s design won a major computer vision competition called ImageNet in 2012. The competition required categorizing 1000+ image categories across 1.2 million images.

New Classification Model 1

Caffe uses structured text files to define network architectures. These text files are based on Google’s Protocol Buffers. You can read the full schema Caffe uses. For the most part we’re not going to work with these, but it’s good to be aware of their existence, since we’ll have to modify them in later steps. The AlexNet prototxt file looks like this, for example: https://github.com/BVLC/caffe/blob/master/models/bvlc_alexnet/train_val.prototxt.
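
If you’re curious what “based on Protocol Buffers” means in practice, here’s a minimal sketch (my own illustration, assuming pycaffe is built and on your PYTHONPATH, and that you’ve saved one of these prototxt files locally): a prototxt file is just a text-serialized protobuf message, so we can parse it into Caffe’s NetParameter object and inspect it from Python:

from google.protobuf import text_format
from caffe.proto import caffe_pb2

# Parse a network definition into Caffe's NetParameter protobuf message
net_param = caffe_pb2.NetParameter()
with open('train_val.prototxt') as f:  # assumes a local copy of the file
    text_format.Merge(f.read(), net_param)

print(net_param.name)                             # the network's name, e.g. "AlexNet"
print([layer.name for layer in net_param.layer])  # every layer in the network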

We’ll train our network for 30 epochs, which means that it will learn (with our training images) then test itself (using our validation images), and adjust the network’s weights depending on how well it’s doing, and repeat this process 30 times. Each time it completes a cycle we’ll get info about Accuracy (0% to 100%, where higher is better) and what our Loss is (the sum of all the mistakes that were made, where lower is better). Ideally we want a network that is able to predict with high accuracy, and with few errors (small loss).
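
To make those two numbers concrete, here’s a toy NumPy sketch (an illustration only, with made-up probabilities; Caffe computes its loss internally via the SoftmaxWithLoss layer we’ll see in the prototxt files later):

import numpy as np

# Hypothetical predicted probabilities for 4 validation images: [dolphin, seahorse]
probs = np.array([[0.9, 0.1],
                  [0.4, 0.6],
                  [0.8, 0.2],
                  [0.3, 0.7]])
labels = np.array([0, 0, 0, 1])  # true categories: 0=dolphin, 1=seahorse

# Accuracy: fraction of images where the highest probability matches the label
accuracy = (probs.argmax(axis=1) == labels).mean()
# Average cross-entropy loss: penalizes confident wrong answers the most
loss = -np.log(probs[np.arange(len(labels)), labels]).mean()
print('accuracy=%.2f loss=%.2f' % (accuracy, loss))  # accuracy=0.75 loss=0.40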

NOTE: some people have reported hitting errors in DIGITS doing this training run. For many, the problem related to available memory (the process needs a lot of memory to work). If you're using Docker, you might want to try increasing the amount of memory available to DIGITS (in Docker, preferences -> advanced -> memory).

Initially, our network’s accuracy is a bit below 50%. This makes sense, because at first it’s just “guessing” between two categories using randomly assigned weights. Over time it’s able to achieve 87.5% accuracy, with a loss of 0.37. The entire 30 epoch run took me just under 6 minutes.

Model Attempt 1

We can test our model using an image we upload or a URL to an image on the web. Let’s test it on a few examples that weren’t in our training/validation dataset:

Model 1 Classify 1

Model 1 Classify 2

It almost seems perfect, until we try another:

Model 1 Classify 3

Here it falls down completely, misclassifying a seahorse as a dolphin, and worse, doing so with a high degree of confidence.

The reality is that our dataset is too small to be useful for training a really good neural network. We really need 10s or 100s of thousands of images, and with that, a lot of computing power to process everything.

Training: Attempt 2, Fine Tuning AlexNet

How Fine Tuning works

Designing a neural network from scratch, collecting data sufficient to train it (e.g., millions of images), and accessing GPUs for weeks to complete the training is beyond the reach of most of us. To make it practical for smaller amounts of data to be used, we employ a technique called Transfer Learning, or Fine Tuning. Fine tuning takes advantage of the layout of deep neural networks, and uses pretrained networks to do the hard work of initial object detection.

Imagine that using a neural network is like looking at something far away through a pair of binoculars. You first put the binoculars to your eyes, and everything is blurry. As you adjust the focus, you start to see colours, lines, shapes, and eventually you are able to pick out the shape of a bird, then with some more adjustment you can identify the species of bird.

In a multi-layered network, the initial layers extract features (e.g., edges), with later layers using these features to detect shapes (e.g., a wheel, an eye), which are then fed into final classification layers that detect items based on accumulated characteristics from previous layers (e.g., a cat vs. a dog). A network has to be able to go from pixels to circles to eyes to two eyes placed in a particular orientation, and so on up to being able to finally conclude that an image depicts a cat.

What we’d like to do is to specialize an existing, pretrained network for classifying a new set of image classes instead of the ones on which it was initially trained. Because the network already knows how to “see” features in images, we’d like to retrain it to “see” our particular image types. We don’t need to start from scratch with the majority of the layers--we want to transfer the learning already done in these layers to our new classification task. Unlike our previous attempt, which used random weights, we’ll use the existing weights of the final network in our training. However, we’ll throw away the final classification layer(s) and retrain the network with our image dataset, fine tuning it to our image classes.

For this to work, we need a pretrained network that is similar enough to our own data that the learned weights will be useful. Luckily, the networks we’ll use below were trained on millions of natural images from ImageNet, which is useful across a broad range of classification tasks.

This technique has been used to do interesting things like screening for eye diseases from medical imagery, identifying plankton species from microscopic images collected at sea, and categorizing the artistic style of Flickr images.

Doing this perfectly, like all of machine learning, requires you to understand the data and network architecture--you have to be careful not to overfit the data, might need to fix (i.e., freeze) some of the layers, might need to insert new layers, etc. However, my experience is that it “Just Works” much of the time, and it’s worth simply doing an experiment to see what you can achieve using our naive approach.

Uploading Pretrained Networks

In our first attempt, we used AlexNet’s architecture, but started with random weights in the network’s layers. What we’d like to do is download and use a version of AlexNet that has already been trained on a massive dataset.

Thankfully we can do exactly this. A snapshot of AlexNet is available for download: https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet. We need the binary .caffemodel file, which is what contains the trained weights, and it’s available for download at http://dl.caffe.berkeleyvision.org/bvlc_alexnet.caffemodel.

While you’re downloading pretrained models, let’s get one more at the same time. In 2014, Google won the same ImageNet competition with GoogLeNet (codenamed Inception): a 22-layer neural network. A snapshot of GoogLeNet is available for download as well, see https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet. Again, we’ll need the .caffemodel file with all the pretrained weights, which is available for download at http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel.

With these .caffemodel files in hand, we can upload them into DIGITS. Go to the Pretrained Models tab on the DIGITS home page and choose Upload Pretrained Model:

Load Pretrained Model

For both of these pretrained models, we can use the defaults DIGITS provides (i.e., colour, squashed images of 256 x 256). We just need to provide the Weights (*.caffemodel) and Model Definition (original.prototxt). Click each of those buttons to select a file.

For the model definitions we can use https://github.com/BVLC/caffe/blob/master/models/bvlc_googlenet/train_val.prototxt for GoogLeNet and https://github.com/BVLC/caffe/blob/master/models/bvlc_alexnet/train_val.prototxt for AlexNet. We aren’t going to use the classification labels of these networks, so we’ll skip adding a labels.txt file:

Upload Pretrained Model

Repeat this process for both AlexNet and GoogLeNet, as we’ll use them both in the coming steps.

Q: "Are there other networks that would be good as a basis for fine tuning?"

The Caffe Model Zoo has quite a few other pretrained networks that could be used, see https://github.com/BVLC/caffe/wiki/Model-Zoo.

Fine Tuning AlexNet for Dolphins and Seahorses

Training a network using a pretrained Caffe Model is similar to starting from scratch, though we have to make a few adjustments. First, we’ll adjust the Base Learning Rate to 0.001 from 0.01, since we don’t need to make such large jumps (i.e., we’re fine tuning). We’ll also use a Pretrained Network, and Customize it.

New Image Classification

In the pretrained model’s definition (i.e., prototxt), we need to rename all references to the final Fully Connected Layer (where the end result classifications happen). We do this because we want the model to re-learn new categories from our dataset vs. its original training data (i.e., we want to throw away the current final layer). We have to rename the last fully connected layer from “fc8” to something else, “fc9” for example. Finally, we also need to adjust the number of categories from 1000 to 2, by changing num_output to 2.

Here are the changes we need to make:

@@ -332,8 +332,8 @@
 }
 layer {
-  name: "fc8"
+  name: "fc9"
   type: "InnerProduct"
   bottom: "fc7"
-  top: "fc8"
+  top: "fc9"
   param {
     lr_mult: 1
@@ -345,5 +345,5 @@
   }
   inner_product_param {
-    num_output: 1000
+    num_output: 2
     weight_filler {
       type: "gaussian"
@@ -359,5 +359,5 @@
   name: "accuracy"
   type: "Accuracy"
-  bottom: "fc8"
+  bottom: "fc9"
   bottom: "label"
   top: "accuracy"
@@ -367,5 +367,5 @@
   name: "loss"
   type: "SoftmaxWithLoss"
-  bottom: "fc8"
+  bottom: "fc9"
   bottom: "label"
   top: "loss"
@@ -375,5 +375,5 @@
   name: "softmax"
   type: "Softmax"
-  bottom: "fc8"
+  bottom: "fc9"
   top: "softmax"
   include { stage: "deploy" }

I’ve included the fully modified file I’m using in src/alexnet-customized.prototxt.

This time our accuracy starts at ~60% and climbs right away to 87.5%, then to 96% and all the way up to 100%, with the Loss steadily decreasing. After 5 minutes we end up with an accuracy of 100% and a loss of 0.0009.

Model Attempt 2

Testing the same seahorse image our previous network got wrong, we see a complete reversal: 100% seahorse.

Model 2 Classify 1

Even a children’s drawing of a seahorse works:

Model 2 Classify 2

The same goes for a dolphin:

Model 2 Classify 3

Even with images that you think might be hard, like this one that has multiple dolphins close together, and with their bodies mostly underwater, it does the right thing:

Model 2 Classify 4

Training: Attempt 3, Fine Tuning GoogLeNet

Like the previous AlexNet model we used for fine tuning, we can use GoogLeNet as well. Modifying the network is a bit trickier, since you have to redefine three fully connected layers instead of just one.

To fine tune GoogLeNet for our use case, we need to once again create a new Classification Model:

New Classification Model

We rename all references to the three fully connected classification layers, loss1/classifier, loss2/classifier, and loss3/classifier, and redefine the number of categories (num_output: 2). Here are the changes we need to make in order to rename the 3 classifier layers, as well as to change from 1000 to 2 categories:

@@ -917,10 +917,10 @@
   exclude { stage: "deploy" }
 }
 layer {
-  name: "loss1/classifier"
+  name: "loss1a/classifier"
   type: "InnerProduct"
   bottom: "loss1/fc"
-  top: "loss1/classifier"
+  top: "loss1a/classifier"
   param {
     lr_mult: 1
     decay_mult: 1
@@ -930,7 +930,7 @@
     decay_mult: 0
   }
   inner_product_param {
-    num_output: 1000
+    num_output: 2
     weight_filler {
       type: "xavier"
       std: 0.0009765625
@@ -945,7 +945,7 @@
 layer {
   name: "loss1/loss"
   type: "SoftmaxWithLoss"
-  bottom: "loss1/classifier"
+  bottom: "loss1a/classifier"
   bottom: "label"
   top: "loss1/loss"
   loss_weight: 0.3
@@ -954,7 +954,7 @@
 layer {
   name: "loss1/top-1"
   type: "Accuracy"
-  bottom: "loss1/classifier"
+  bottom: "loss1a/classifier"
   bottom: "label"
   top: "loss1/accuracy"
   include { stage: "val" }
@@ -962,7 +962,7 @@
 layer {
   name: "loss1/top-5"
   type: "Accuracy"
-  bottom: "loss1/classifier"
+  bottom: "loss1a/classifier"
   bottom: "label"
   top: "loss1/accuracy-top5"
   include { stage: "val" }
@@ -1705,10 +1705,10 @@
   exclude { stage: "deploy" }
 }
 layer {
-  name: "loss2/classifier"
+  name: "loss2a/classifier"
   type: "InnerProduct"
   bottom: "loss2/fc"
-  top: "loss2/classifier"
+  top: "loss2a/classifier"
   param {
     lr_mult: 1
     decay_mult: 1
@@ -1718,7 +1718,7 @@
     decay_mult: 0
   }
   inner_product_param {
-    num_output: 1000
+    num_output: 2
     weight_filler {
       type: "xavier"
       std: 0.0009765625
@@ -1733,7 +1733,7 @@
 layer {
   name: "loss2/loss"
   type: "SoftmaxWithLoss"
-  bottom: "loss2/classifier"
+  bottom: "loss2a/classifier"
   bottom: "label"
   top: "loss2/loss"
   loss_weight: 0.3
@@ -1742,7 +1742,7 @@
 layer {
   name: "loss2/top-1"
   type: "Accuracy"
-  bottom: "loss2/classifier"
+  bottom: "loss2a/classifier"
   bottom: "label"
   top: "loss2/accuracy"
   include { stage: "val" }
@@ -1750,7 +1750,7 @@
 layer {
   name: "loss2/top-5"
   type: "Accuracy"
-  bottom: "loss2/classifier"
+  bottom: "loss2a/classifier"
   bottom: "label"
   top: "loss2/accuracy-top5"
   include { stage: "val" }
@@ -2435,10 +2435,10 @@
   }
 }
 layer {
-  name: "loss3/classifier"
+  name: "loss3a/classifier"
   type: "InnerProduct"
   bottom: "pool5/7x7_s1"
-  top: "loss3/classifier"
+  top: "loss3a/classifier"
   param {
     lr_mult: 1
     decay_mult: 1
@@ -2448,7 +2448,7 @@
     decay_mult: 0
   }
   inner_product_param {
-    num_output: 1000
+    num_output: 2
     weight_filler {
       type: "xavier"
     }
@@ -2461,7 +2461,7 @@
 layer {
   name: "loss3/loss"
   type: "SoftmaxWithLoss"
-  bottom: "loss3/classifier"
+  bottom: "loss3a/classifier"
   bottom: "label"
   top: "loss"
   loss_weight: 1
@@ -2470,7 +2470,7 @@
 layer {
   name: "loss3/top-1"
   type: "Accuracy"
-  bottom: "loss3/classifier"
+  bottom: "loss3a/classifier"
   bottom: "label"
   top: "accuracy"
   include { stage: "val" }
@@ -2478,7 +2478,7 @@
 layer {
   name: "loss3/top-5"
   type: "Accuracy"
-  bottom: "loss3/classifier"
+  bottom: "loss3a/classifier"
   bottom: "label"
   top: "accuracy-top5"
   include { stage: "val" }
@@ -2489,7 +2489,7 @@
 layer {
   name: "softmax"
   type: "Softmax"
-  bottom: "loss3/classifier"
+  bottom: "loss3a/classifier"
   top: "softmax"
   include { stage: "deploy" }
 }

I’ve put the complete file in src/googlenet-customized.prototxt.

Q: "What about changes to the prototext definitions of these networks? We changed the fully connected layer name(s), and the number of categories. What else could, or should be changed, and in what circumstances?"

Great question, and it's something I'm wondering, too. For example, I know that we can "fix" certain layers so the weights don't change. Doing other things involves understanding how the layers work, which is beyond this guide, and also beyond its author at present!

Like we did with fine tuning AlexNet, we also reduce the learning rate by a factor of 10, from 0.01 to 0.001.

Q: "What other changes would make sense when fine tuning these networks? What about different numbers of epochs, batch sizes, solver types (Adam, AdaDelta, AdaGrad, etc), learning rates, policies (Exponential Decay, Inverse Decay, Sigmoid Decay, etc), step sizes, and gamma values?"

Great question, and one that I wonder about as well. I only have a vague understanding of these and it’s likely that there are improvements we can make if you know how to alter these values when training. This is something that needs better documentation.

Because GoogLeNet has a more complicated architecture than AlexNet, fine tuning it requires more time. On my laptop, it takes 10 minutes to retrain GoogLeNet with our dataset, achieving 100% accuracy and a loss of 0.0070:

Model Attempt 3

Just as we saw with the fine tuned version of AlexNet, our modified GoogLeNet performs amazingly well--the best so far:

Model Attempt 3 Classify 1

Model Attempt 3 Classify 2

Model Attempt 3 Classify 3

Using our Model

With our network trained and tested, it’s time to download and use it. Each of the models we trained in DIGITS has a Download Model button, as well as a way to select different snapshots within our training run (e.g., Epoch #30):

Trained Models

Clicking Download Model downloads a tar.gz archive containing the following files:

deploy.prototxt
mean.binaryproto
solver.prototxt
info.json
original.prototxt
labels.txt
snapshot_iter_90.caffemodel
train_val.prototxt

There’s a nice description in the Caffe documentation about how to use the model we just built. It says:

A network is defined by its design (.prototxt), and its weights (.caffemodel). As a network is being trained, the current state of that network's weights are stored in a .caffemodel. With both of these we can move from the train/test phase into the production phase.

In its current state, the network is not designed for deployment. Before we can release our network as a product, we often need to alter it in a few ways:

  1. Remove the data layer that was used for training, since for classification we are no longer providing labels for our data.
  2. Remove any layer that is dependent upon data labels.
  3. Set the network up to accept data.
  4. Have the network output the result.

DIGITS has already done the work for us, separating out the different versions of our prototxt files. The files we’ll care about when using this network are:

  • deploy.prototxt - the definition of our network, ready for accepting image input data
  • mean.binaryproto - our model will need us to subtract the image mean from each image that it processes, and this is the mean image.
  • labels.txt - a list of our labels (dolphin, seahorse) in case we want to print them vs. just the category number
  • snapshot_iter_90.caffemodel - these are the trained weights for our network

We can use these files in a number of ways to classify new images. For example, in our CAFFE_ROOT we can use build/examples/cpp_classification/classification.bin to classify one image:

$ cd $CAFFE_ROOT/build/examples/cpp_classification
$ ./classification.bin deploy.prototxt snapshot_iter_90.caffemodel mean.binaryproto labels.txt dolphin1.jpg

This will spit out a bunch of debug text, followed by the predictions for each of our two categories:

0.9997 - “dolphin”
0.0003 - “seahorse”

You can read the complete C++ source for this in the Caffe examples.

For a classification version that uses the Python interface, DIGITS includes a nice example. There's also a fairly well documented Python walkthrough in the Caffe examples.

Python example

Let's write a program that uses our fine-tuned GoogLeNet model to classify the untrained images we have in data/untrained-samples. I've cobbled this together based on the examples above, as well as the caffe Python module's source, which you should prefer to anything I'm about to say.

A full version of what I'm going to discuss is available in src/classify-samples.py. Let's begin!

First, we'll need the NumPy module. In a moment we'll be using NumPy to work with ndarrays, which Caffe uses a lot. If you haven't used them before, as I had not, you'd do well to begin by reading this Quickstart tutorial.
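
For a taste of why ndarrays matter here: images load from disk as (height, width, channels) arrays, but Caffe wants (channels, height, width), a reshuffle we'll configure on the Transformer below. A quick illustration:

import numpy as np

image = np.zeros((256, 256, 3))   # H x W x C, as an image loads from disk
chw = image.transpose((2, 0, 1))  # C x H x W, the layout Caffe expects
print(image.shape)                # (256, 256, 3)
print(chw.shape)                  # (3, 256, 256)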

Second, we'll need to load the caffe module from our CAFFE_ROOT dir. If it's not already included in your Python environment, you can force it to load by adding it manually. Along with it we'll also import caffe's protobuf module:

import os
import sys

import numpy as np

caffe_root = '/path/to/your/caffe_root'
sys.path.insert(0, os.path.join(caffe_root, 'python'))
import caffe
from caffe.proto import caffe_pb2

Next we need to tell Caffe whether to use the CPU or GPU. For our experiments, the CPU is fine:

caffe.set_mode_cpu()

Now we can use caffe to load our trained network. To do so, we'll need some of the files we downloaded from DIGITS, namely:

  • deploy.prototxt - our "network file", the description of the network.
  • snapshot_iter_90.caffemodel - our trained "weights"

We obviously need to provide the full path, and I'll assume that my files are in a dir called model/:

model_dir = 'model'
deploy_file = os.path.join(model_dir, 'deploy.prototxt')
weights_file = os.path.join(model_dir, 'snapshot_iter_90.caffemodel')
net = caffe.Net(deploy_file, caffe.TEST, weights=weights_file)

The caffe.Net() constructor takes a network file, a phase (caffe.TEST or caffe.TRAIN), as well as an optional weights filename. When we provide a weights file, the Net will automatically load them for us. The Net has a number of methods and attributes you can use.
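
As a quick sanity check, you can poke around the loaded Net yourself: its blobs (layer outputs) and params (learned weights) are exposed as NumPy ndarrays. The shapes below are examples only:

# Inspect the network we just loaded (shapes shown are illustrative)
for name, blob in net.blobs.items():
    print('%s: %s' % (name, blob.data.shape))        # e.g. data: (1, 3, 256, 256)
for name, params in net.params.items():
    print('%s weights: %s' % (name, params[0].data.shape))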

Note: There is also a deprecated version of this constructor, which seems to get used often in sample code on the web. It looks like this, in case you encounter it:

net = caffe.Net(str(deploy_file), str(model_file), caffe.TEST)

We're interested in loading images of various sizes into our network for testing. As a result, we'll need to transform them into a shape that our network can use (i.e., colour, 256x256). Caffe provides the Transformer class for this purpose. We'll use it to create a transformation appropriate for our images/network:

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
# set_transpose: https://github.com/BVLC/caffe/blob/61944afd4e948a4e2b4ef553919a886a8a8b8246/python/caffe/io.py#L187
transformer.set_transpose('data', (2, 0, 1))
# set_raw_scale: https://github.com/BVLC/caffe/blob/61944afd4e948a4e2b4ef553919a886a8a8b8246/python/caffe/io.py#L221
transformer.set_raw_scale('data', 255)
# set_channel_swap: https://github.com/BVLC/caffe/blob/61944afd4e948a4e2b4ef553919a886a8a8b8246/python/caffe/io.py#L203
transformer.set_channel_swap('data', (2, 1, 0))
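
If you’re wondering why these three transformations: caffe.io.load_image() (which we’ll use below) returns a (height, width, channels) ndarray of RGB values scaled to [0, 1], while the Caffe models we’re using expect (channels, height, width) BGR values in [0, 255]. The set_transpose, set_raw_scale, and set_channel_swap calls above handle exactly those conversions.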

We can also use the mean.binaryproto file DIGITS gave us to set our transformer's mean:

# This code for setting the mean from https://github.com/NVIDIA/DIGITS/tree/master/examples/classification
mean_file = os.path.join(model_dir, 'mean.binaryproto')
with open(mean_file, 'rb') as infile:
    blob = caffe_pb2.BlobProto()
    blob.MergeFromString(infile.read())
    if blob.HasField('shape'):
        blob_dims = blob.shape
        assert len(blob_dims) == 4, 'Shape should have 4 dimensions - shape is %s' % blob.shape
    elif blob.HasField('num') and blob.HasField('channels') and \
            blob.HasField('height') and blob.HasField('width'):
        blob_dims = (blob.num, blob.channels, blob.height, blob.width)
    else:
        raise ValueError('blob does not provide shape or 4d dimensions')
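    # Reshape the flat data to (channels, height, width), then average over
    # height and width to get one mean value per channel (the "mean pixel")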
    pixel = np.reshape(blob.data, blob_dims[1:]).mean(1).mean(1)
    transformer.set_mean('data', pixel)

If we had a lot of labels, we might also choose to read in our labels file, which we can use later by looking up the label for a probability using its position (e.g., 0=dolphin, 1=seahorse):

labels_file = os.path.join(model_dir, 'labels.txt')
labels = np.loadtxt(labels_file, str, delimiter='\n')

Now we're ready to classify an image. We'll use caffe.io.load_image() to read our image file, then use our transformer to reshape it and set it as our network's data layer:

# Load the image from disk using caffe's built-in I/O module
image = caffe.io.load_image(fullpath)
# Preprocess the image into the proper format for feeding into the model
net.blobs['data'].data[...] = transformer.preprocess('data', image)

Q: "How could I use images (i.e., frames) from a camera or video stream instead of files?"

Great question, here's a skeleton to get you started:

import cv2
...
# Get the shape of our input data layer, so we can resize the image
input_shape = net.blobs['data'].data.shape
...
webCamCap = cv2.VideoCapture(0) # could also be a URL, filename
if webCamCap.isOpened():
    rval, frame = webCamCap.read()
else:
    rval = False

while rval:
    rval, frame = webCamCap.read()
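    # Note (assumption): cv2 frames are BGR uint8 arrays; depending on how your
    # transformer is configured, you may need to convert/resize the frame first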
    net.blobs['data'].data[...] = transformer.preprocess('data', frame)
    ...

webCamCap.release()

Back to our problem, we next need to run the image data through our network and read out the probabilities from our network's final 'softmax' layer, which will be in order by label category:

# Run the image's pixel data through the network
out = net.forward()
# Extract the probabilities of our two categories from the final layer
softmax_layer = out['softmax']
# Here we're converting to Python types from ndarray floats
dolphin_prob = softmax_layer.item(0)
seahorse_prob = softmax_layer.item(1)

# Print the results. I'm using labels just to show how it's done
label = labels[0] if dolphin_prob > seahorse_prob else labels[1]
filename = os.path.basename(fullpath)
print '%s is a %s dolphin=%.3f%% seahorse=%.3f%%' % (filename, label, dolphin_prob*100, seahorse_prob*100)

Running the full version of this (see src/classify-samples.py) using our fine-tuned GoogLeNet network on our data/untrained-samples images gives me the following output:

[...truncated caffe network output...]
dolphin1.jpg is a dolphin dolphin=99.968% seahorse=0.032%
dolphin2.jpg is a dolphin dolphin=99.997% seahorse=0.003%
dolphin3.jpg is a dolphin dolphin=99.943% seahorse=0.057%
seahorse1.jpg is a seahorse dolphin=0.365% seahorse=99.635%
seahorse2.jpg is a seahorse dolphin=0.000% seahorse=100.000%
seahorse3.jpg is a seahorse dolphin=0.014% seahorse=99.986%

I'm still trying to learn all the best practices for working with models in code. I wish I had more and better documented code examples, APIs, premade modules, etc to show you here. To be honest, most of the code examples I’ve found are terse, and poorly documented--Caffe’s documentation is spotty, and assumes a lot.

It seems to me like there’s an opportunity for someone to build higher-level tools on top of the Caffe interfaces for beginners and basic workflows like we've done here. It would be great if there were more simple modules in high-level languages that I could point you at that “did the right thing” with our model; someone could/should take this on, and make using Caffe models as easy as DIGITS makes training them. I’d love to have something I could use in node.js, for example. Ideally one shouldn’t be required to know so much about the internals of the model or Caffe. I haven’t used it yet, but DeepDetect looks interesting on this front, and there are likely many other tools I don’t know about.

Results

At the beginning we said that our goal was to write a program that used a neural network to correctly classify all of the images in data/untrained-samples. These are images of dolphins and seahorses that were never used in the training or validation data:

Untrained Dolphin Images

Dolphin 1 Dolphin 2 Dolphin 3

Untrained Seahorse Images

Seahorse 1 Seahorse 2 Seahorse 3

Let's look at how each of our three attempts did with this challenge:

Model Attempt 1: AlexNet from Scratch (3rd Place)

Image           Dolphin   Seahorse   Result
dolphin1.jpg    71.11%    28.89%     😑
dolphin2.jpg    99.2%     0.8%       😎
dolphin3.jpg    63.3%     36.7%      😕
seahorse1.jpg   95.04%    4.96%      😞
seahorse2.jpg   56.64%    43.36%     😕
seahorse3.jpg   7.06%     92.94%     😁

Model Attempt 2: Fine Tuned AlexNet (2nd Place)

Image           Dolphin   Seahorse   Result
dolphin1.jpg    99.1%     0.09%      😎
dolphin2.jpg    99.5%     0.05%      😎
dolphin3.jpg    91.48%    8.52%      😁
seahorse1.jpg   0%        100%       😎
seahorse2.jpg   0%        100%       😎
seahorse3.jpg   0%        100%       😎

Model Attempt 3: Fine Tuned GoogLeNet (1st Place)

Image           Dolphin   Seahorse   Result
dolphin1.jpg    99.86%    0.14%      😎
dolphin2.jpg    100%      0%         😎
dolphin3.jpg    100%      0%         😎
seahorse1.jpg   0.5%      99.5%      😎
seahorse2.jpg   0%        100%       😎
seahorse3.jpg   0.02%     99.98%     😎

Conclusion

It’s amazing how well our model works, and what’s possible by fine tuning a pretrained network. Obviously our dolphin vs. seahorse example is contrived, and the dataset overly limited--we really do want more and better data if we want our network to be robust. But since our goal was to examine the tools and workflows of neural networks, it’s turned out to be an ideal case, especially since it didn’t require expensive equipment or massive amounts of time.

Above all I hope that this experience helps to remove the overwhelming fear of getting started. Deciding whether or not it’s worth investing time in learning the theories of machine learning and neural networks is easier when you’ve been able to see it work in a small way. Now that you’ve got a setup and a working approach, you can try doing other sorts of classifications. You might also look at the other types of things you can do with Caffe and DIGITS, for example, finding objects within an image, or doing segmentation.

Have fun with machine learning!

Also available in Chinese (Traditional).
Also available in Korean.

Download Details:

Author: Humphd
Source Code: https://github.com/humphd/have-fun-with-machine-learning 
License: View license

#machinelearning #tutorial #neuralnetwork 

Nigel Uys

Ansible-tuto: Ansible tutorial

This tutorial presents Ansible step-by-step. You'll need to have a (virtual or physical) machine to act as an Ansible node. A Vagrant environment is provided for going through this tutorial.

Ansible is a configuration management software that lets you control and configure nodes from another machine. What makes it different from other management software is that Ansible uses (potentially existing) SSH infrastructure, while others (Chef, Puppet, ...) need a specific PKI infrastructure to be set up.

Ansible also emphasizes push mode, where configuration is pushed from a master machine (a master machine is only a machine from which you can SSH to nodes) to nodes, while most other CM systems typically work the other way around (nodes pull their config at times from a master machine).

This mode is really interesting since you do not need to have a 'publicly' accessible 'master' to be able to configure remote nodes: it's the nodes that need to be accessible (we'll see later that 'hidden' nodes can pull their configuration too!), and most of the time they are.

This tutorial has been tested with Ansible 2.9.

We're also assuming you have a keypair in your ~/.ssh directory.

Quick start

  • install Vagrant if you don't have it
  • install ansible (preferably 2.10.5+ and using pip+virtualenv)
  • vagrant up
  • goto step-00

Complete explanations

Installing Ansible

The reference is the installation guide, but I strongly recommend the Using pip & virtualenv (highly recommended!) method.

Using pip & virtualenv (highly recommended!)

The best way to install Ansible (by far) is to use pip and virtual environments.

Using virtualenv will let you have multiple Ansible versions installed side by side, and test upgrades or use different versions in different projects. Also, by using a virtualenv, you won't pollute your system's python installation.

Check virtualenvwrapper for this. It makes managing virtualenvs very easy.

Under Ubuntu, installing virtualenv & virtualenvwrapper can be done like so:

sudo apt install python3-virtualenv virtualenvwrapper python3-pip
exec $SHELL

You can then create a virtualenv:

mkvirtualenv ansible-tuto
workon ansible-tuto

(mkvirtualenv usually switches you automatically to your newly created virtualenv, so here workon ansible-tuto is not strictly necessary, but let's be safe).

Then, install ansible via pip:

pip install ansible==2.7.1

(or use whatever version you want).

When you're done, you can deactivate your virtualenv to return to your system's python settings & modules:

deactivate

If you later want to return to your virtualenv:

workon ansible-tuto

Use lsvirtualenv to list all your virtual environments.

From source (if you want to hack on ansible source code)

Ansible devel branch is always usable, so we'll run straight from a git checkout. You might need to install git for this (sudo apt-get install git on Debian/Ubuntu).

git clone git://github.com/ansible/ansible.git
cd ./ansible

At this point, we can load the Ansible environment:

source ./hacking/env-setup

From a distribution package (discouraged)

sudo apt-get install ansible

From a built deb package (discouraged)

When running from a distribution package, this is absolutely not necessary. If you prefer running from an up-to-date Debian package, Ansible provides a make target to build it. You need a few packages to build the deb, and a few dependencies:

sudo apt-get install make fakeroot cdbs python-support python-yaml python-jinja2 python-paramiko python-crypto python-pip
git clone git://github.com/ansible/ansible.git
cd ./ansible
make deb
sudo dpkg -i ../ansible_x.y_all.deb (version may vary)

Cloning the tutorial

git clone https://github.com/leucos/ansible-tuto.git
cd ansible-tuto

Running the tutorials interactively with Docker

You can run these tutorials interactively, including a very simple setup, using Docker.

Check this repository for details.

Using Vagrant with the tutorial

It's highly recommended to use Vagrant to follow this tutorial. If you don't have it already, setting up should be quite easy and is described in step-00/README.md.

If you wish to proceed without Vagrant (not recommended!), go straight to step-01/README.md.

Contents

Terminology:

  • command or action: ansible module executed in stand-alone mode. Intro in step-02.
  • task: combines an action (a module and its arguments) with a name and optionally some other keywords (like looping directives).
  • play: a yaml structure executing a list of roles or tasks over a list of hosts
  • playbook: yaml file containing multiple plays. Intro in step-04.
  • role: an organisational unit grouping tasks together in order to achieve something (install a piece of software for instance). Intro in step-12.

Just in case you want to skip to a specific step, here is a topic table of contents.

Contributing

Thanks to all people who have contributed to this tutorial:

(and sorry if I forgot anyone)

I've been using Ansible almost since its birth, but I learned a lot in the process of writing this tutorial. If you want to jump in, it's a great way to learn; feel free to add your contributions.

The chapters being written live in the writing branch.

If you have ideas on topics that would require a chapter, please open a PR.

I'm also open on pairing for writing chapters. Drop me a note if you're interested.

If you make changes or add chapters, please fill the test/expectations file and run the tests (test/run.sh). See the test/run.sh file for (a bit) more information.

When adding a new chapter (e.g. step-NN), please issue:

cd step-99
ln -sf ../step-NN/{hosts,roles,site.yml,group_vars,host_vars} .

For typos, grammar, etc... please send a PR for the master branch directly.

Thank you!

Download Details:

Author: leucos
Source Code: https://github.com/leucos/ansible-tuto 
License: View license

#ansible #vagrant #tutorial 

Dipesh Malvia

JWT Authentication Tutorial With Express & MongoDB | Rest API Project | Node.js for Beginners #10

In this video we will continue to build our contact management Rest API project using Express & MongoDB. We will build user registration and login endpoints. We will see how to hash raw passwords and add authentication using JWT to sign and verify access tokens, along with protecting routes.

⭐️ Support my channel⭐️ https://www.buymeacoffee.com/dipeshmalvia

⭐️ GitHub link for Reference ⭐️ https://github.com/dmalvia/Express_MongoDB_Rest_API_Tutorial

⭐️ Node.js for beginners Playlist ⭐️ https://youtube.com/playlist?list=PLTP3E5bPW796_icZanMqhdg7i0Cl7Y51F

🔥 Video contents... ENJOY 👇 

  • 0:00:00 - Intro 
  • 0:00:38 - Adding User Routes - Registration, Login & Current 
  • 0:04:22 - Adding User Controller 
  • 0:07:53 - Mongoose Schema for User 
  • 0:10:40 - User Registration & Password Hashing 
  • 0:18:28 - What is JWT ? 
  • 0:19:45 - User Login & JWT Access Token 
  • 0:26:44 - Protecting Routes - User 
  • 0:28:08 - Verify JWT Token Middleware 
  • 0:37:43 - Handle Relationship User & Contact Schema 
  • 0:39:02 - Protecting Routes - Contact 
  • 0:40:11 - Logged in User Get All Contacts 
  • 0:41:28 - Logged in User Create New Contact 
  • 0:45:02 - Logged in User Update Contact 
  • 0:46:17 - Logged in User Delete Contact 
  • 0:50:00 - Outro 

⭐️ JavaScript ⭐️ 

🔗 Social Medias 🔗 

⭐️ Tags ⭐️ - Node.js, Express & MongoDB Project - Build Rest API Project Express & MongoDB - JWT & EXPRESS Authentication Crash Course - Express Project For Beginners 

⭐️ Hashtags ⭐️ #nodejs #express #beginners #tutorial 

Disclaimer: It doesn't feel good to have a disclaimer in every video but this is how the world is right now. All videos are for educational purpose and use them wisely. Any video may have a slight mistake, please take decisions based on your research. This video is not forcing anything on you.

https://youtu.be/ICMnoKxlYYg

Dipesh Malvia

Build Rest Api Project With Express & MongoDB | CRUD API | Node.js Tutorial for Beginners #8

Build Rest Api Project With Express & MongoDB | CRUD API | Node.js Tutorial for Beginners #9 

In this video we will continue to build our contact management Rest API project using Express & MongoDB. We will implement project-wide error handling, MongoDB setup, and CRUD operations for our contacts resource.

⭐️ Support my channel⭐️ https://www.buymeacoffee.com/dipeshmalvia

⭐️ GitHub link for Reference ⭐️ https://github.com/dmalvia/Express_MongoDB_Rest_API_Tutorial

⭐️ Node.js for beginners Playlist ⭐️ https://youtube.com/playlist?list=PLTP3E5bPW796_icZanMqhdg7i0Cl7Y51F 

🔥 Video contents... ENJOY 👇 

  • 0:00:00 - Intro 
  • 0:00:36 - Error Handling Middleware 
  • 0:06:56 - Express Async Handler 
  • 0:09:04 - MongoDb Setup 
  • 0:13:34 - Connect Express App to MongoDB Database 
  • 0:17:40 - Mongoose Schema for Contacts 
  • 0:20:10 - CRUD Get All Contacts 
  • 0:21:26 - CRUD Create New Contact 
  • 0:23:42 - CRUD Get Contact 
  • 0:25:08 - CRUD Update Contact 
  • 0:26:50 - CRUD Delete Contact 
  • 0:28:00 - Outro 

⭐️ JavaScript ⭐️ 

🔗 Social Medias 🔗 

⭐️ Tags ⭐️ - Node.js, Express & MongoDB Project - Build Rest API Project Express & MongoDB - Express CRUD API Tutorial - Node.Js & Express Crash Course 

⭐️ Hashtags ⭐️ #nodejs #express #beginners #tutorial 

Disclaimer: It doesn't feel good to have a disclaimer in every video but this is how the world is right now. All videos are for educational purpose and use them wisely. Any video may have a slight mistake, please take decisions based on your research. This video is not forcing anything on you.

https://youtu.be/niw5KSO94YI

Nat Grady

How to Control Your Climate with This Raspberry Pi Thermostat Tutorial

Smart homes are the future, but what do you do if you have an old air conditioner or heater in your home? Replacing old devices isn’t always feasible, but you can automate them with a Raspberry Pi.

The air conditioning in many homes lacks modern niceties like central automation, programmable thermostats, multiple sensors, or Wi-Fi control. But older air-conditioning tech is still reliable, so in many cases, it’s unlikely to be upgraded soon.

That, however, requires users to frequently interrupt work or sleep to turn an air conditioner on or off. This is particularly true in houses with tight layouts, like mine:

A floor plan with an air-conditioning unit at the top, to the right of center. Its output has to round two corners to reach most rooms, including the bedroom at the bottom left.

My unorthodox floor plan makes cooling with a single in-window air conditioning unit a challenge. There is no direct line of sight for remote control from the bedroom and no direct path for cool air to reach all the rooms.

US homes commonly have central air conditioning, but this isn’t the case globally. Not having central AC limits automation options, making it more difficult to achieve the same temperature throughout the whole home. In particular, it makes it hard to avoid temperature fluctuations that may require manual intervention to address.

As an engineer and Internet of Things (IoT) enthusiast, I saw an opportunity to do a few useful things at once:

  • Help conserve energy by improving the efficiency of my stand-alone air-conditioning unit
  • Make my home more comfortable through automation and Google Home integration
  • Customize my solution exactly the way I wanted it, instead of being limited to commercially available options
  • Brush up on some of my professional skills, using tried and tested hardware

My air conditioner is a basic device with a simple infrared remote control. I was aware of devices that enable air-conditioning units to be used with smart home systems, such as Sensibo or Tado. Instead, I took a DIY approach and created a Raspberry Pi thermostat, allowing for more sophisticated control based on sensor input from various rooms.

Raspberry Pi Thermostat Hardware

I was already using several Raspberry Pi Zero Ws, coupled with DHT22 sensor modules, to monitor the temperature and humidity in different rooms. Because of the segmented floor plan, I installed the sensors to monitor how warm it was in different parts of my house.

I also have a home surveillance system (not required for this project) on a Windows 10 PC with WSL 2. I wanted to integrate the sensor readings into the surveillance videos, as a text overlay on the video feed.

Wiring the Sensor

The sensors were straightforward to wire, having only three connections:

  • VCC from sensor to PIN1 - 3v3
  • DATA from sensor to PIN7 - GPIO4
  • GND from sensor to PIN9 - GND

A wiring diagram for the DHT22 module, showing the pins used to connect it to the Raspberry Pi.

I used Raspberry Pi OS Lite, installing Python 3 with PiP and the Adafruit_DHT library for Python to read the sensor data. It’s technically deprecated but simpler to install and use. Plus, it requires fewer resources for our use case.

I also wanted to have a log of all the readings so I used a third-party server, ThingSpeak, to host my data and serve it via API calls. It’s relatively straightforward, and since I did not need real-time readings, I opted to send data every five minutes.

import requests
import time
import Adafruit_DHT

KEY = 'api key'

def pushData(temp: float, hum: float):
    '''Takes temperature and humidity and pushes them to ThingSpeak'''
    url = 'https://api.thingspeak.com/update'
    params = {'api_key': KEY, 'field5': temp, 'field6': hum}
    requests.get(url, params=params)

def getData(sensor: int, pin: int):
    '''
    Input DHT sensor type and RPi GPIO pin to collect a sample of data

    Parameters:
    sensor: Either 11 or 22, depending on sensor used (DHT11 or DHT22)
    pin: GPIO pin used (e.g. 4)
    '''
    try:
        humidity, temperature = Adafruit_DHT.read_retry(sensor, pin)
        return humidity, temperature
    except Exception:
        # Sensor reads fail intermittently; signal the caller instead of crashing
        return None, None

if __name__ == "__main__":
    sensor = 22  # Change to 11 if using a DHT11
    pin = 4      # I used GPIO pin 4
    while True:
        h, t = getData(sensor, pin)
        if h is not None and t is not None:
            pushData(t, h)
        time.sleep(300)  # one reading every five minutes

On my dedicated surveillance PC, running WSL 2, I set up a PHP script that fetches the data from ThingSpeak, formats it, and writes it in a simple .txt file. This .txt file is needed for my surveillance software to overlay it on top of the video stream.

Because I had some automation in the house already, including smart light bulbs and several routines in Google Home, it followed that I would use the sensor data as a smart thermostat in Google Home. My plan was to create a Google Home routine that would turn the air conditioning on or off automatically based on room temperature, without the need for user input.

 

A photograph of a black puck-shaped device.

The PNI SafeHome PT11IR Wi-Fi smart remote control unit.

 

Pricier all-in-one solutions like those from Sensibo and Tado require less technical setup, but for a fraction of the cost, the PNI SafeHome PT11IR enabled me to use my phone to control any number of infrared devices within its range. The control app, Tuya, integrates with Google Home.

Overcoming Google Home Integration Issues

With a smart-enabled air conditioner and sensor data available, I tried to get the Raspberry Pi recognized as a thermostat in Google Home, but to no avail. I was able to send the sensor data to Google IoT Cloud and its Pub/Sub service, but there was no way to feed it into Google Home to create a routine based on that data.

After pondering this for a few days, I thought of a new approach. What if I didn’t need to send the data to Google Home? What if I could check the data locally and send a command to Google Home to turn the air conditioner on or off? I tested voice commands with success, so this approach seemed promising.

A quick search turned up Assistant Relay, a Node.js-powered system that enables a user to send commands to Google Assistant, allowing the user to tie anything to Google Assistant as long as it knows what to do with the input it receives.

Even better, with Assistant Relay, I could send commands to my Google Assistant by simply sending POST requests to the device running the Node.js server (in this case, my Raspberry Pi Zero W) with some required parameters. That’s it. The script is well documented, so I won’t go into much detail here.

Since the sensor data was already being read on the surveillance PC, I figured I could integrate the request into the PHP script to keep things in one place.

Since you likely don’t have the .txt file requirement, you can simplify the process by reading the sensor data directly and issuing commands based on it to the Google Assistant service via Assistant Relay. All of this can be done from a single Raspberry Pi, without additional hardware. However, as I had already completed half of the work, it made sense to use what I had. Both scripts in this article can run on a single machine, and the PHP script can be rewritten in Python if needed.
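If you go that single-device route, the whole loop might look like the sketch below. The Assistant Relay endpoint, port, and payload follow that project's documented POST format, and the thresholds mirror the ones used later in this article; treat it as a starting point rather than my exact setup. For brevity it omits the last-status bookkeeping described below:

import time

import requests
import Adafruit_DHT

# Assumed local Assistant Relay install; adjust host/port to your setup.
RELAY_URL = 'http://127.0.0.1:3000/assistant'

def set_ac(on: bool):
    '''Send a "turn on/off hallway ac" command through Assistant Relay.'''
    command = 'turn on hallway ac' if on else 'turn off hallway ac'
    requests.post(RELAY_URL, json={
        'command': command,
        'converse': False,
        'user': 'designated user',
    })

if __name__ == '__main__':
    while True:
        humidity, temperature = Adafruit_DHT.read_retry(22, 4)
        if temperature is not None:
            if temperature > 27:
                set_ac(True)
            elif temperature < 24:
                set_ac(False)
        time.sleep(300)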

Setting Conditions and Automating Operation

I wanted the automatic power cycling to happen only during nighttime, so I defined the hours for which I wanted to automate operation—10 PM to 7 AM—and set the preferred temperature. Identifying the correct temperature intervals—to achieve a comfortable range without shortening the life span of the air-conditioning unit by cycling its power too often—required a few tries to get it right.

The PHP script that created the sensor data overlay was set up to run every five minutes via a cron job, so the only things I added to it were the conditions and the POST request.
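For reference, a crontab entry for that five-minute schedule looks like this (the script path here is hypothetical):

*/5 * * * * php /home/pi/thermostat/overlay.php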

However, this created an issue. If the conditions were met, the script would send a “turn on” command every five minutes, even if the air conditioning was already on. This caused the unit to beep annoyingly, even on the “turn off” command. To fix this, I needed a way to read the current status of the unit.

Elegance wasn’t a priority, so I made a JSON file containing an array. Whenever a “turn on” or “turn off” command completed successfully, the script would append that status to the array. This solved the redundancy problem; however, particularly hot days or excessive heating during the winter could still cause the conditions to be met again. I decided a manual override would suffice in these situations; I’ll leave adding a return before the switch snippet to that end as an exercise for the reader:

<?php

switch(true)
{
    case $temperature > 27:
        turnAc('on');
        break;
    case $temperature < 24:
        turnAc('off');
        break;
}

function turnAc($status)
{
    $command = 'turn on hallway ac'; // hallway ac is the Google Home device name for my AC
    if ($status == 'off')
    {
        $command = 'turn off hallway ac';
    }

    if ($status == 'on' && checkAc() == 'on')
    {
        return;
    }

    if ($status == 'off' && checkAc() == 'off')
    {
        return;
    }

    $curl = curl_init();
    curl_setopt_array($curl, array(
      CURLOPT_URL => 'local assistant server ip',
      CURLOPT_RETURNTRANSFER => true,
      CURLOPT_ENCODING => '',
      CURLOPT_MAXREDIRS => 10,
      CURLOPT_TIMEOUT => 0,
      CURLOPT_FOLLOWLOCATION => true,
      CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
      CURLOPT_CUSTOMREQUEST => 'POST',
      // Build the body with json_encode; naive string concatenation would
      // leave $command unquoted and produce invalid JSON.
      CURLOPT_POSTFIELDS => json_encode(array(
        'command' => $command,
        'converse' => false,
        'user' => 'designated user'
      )),
      CURLOPT_HTTPHEADER => array(
        'Content-Type: application/json'
      ),
    ));

    $response = curl_exec($curl);
    curl_close($curl);
    // json_decode() doesn't throw on invalid JSON; it simply returns null,
    // which the error check below already handles.
    $obj = json_decode($response);

    if (!$obj || $obj->success != true)
    {
        markAc($status == 'on' ? 'off' : 'on'); // if error, mark it as opposite status
        return;
    }

    markAc($status);
}

function markAc($status)
{
    $file = __DIR__ . "/markAc.json";
    $json = json_decode(file_get_contents($file), true);
    $json[] = array(date('F j, Y H:i:s'), $status);

    $handler = fopen($file, "w") or die("Unable to open file!");
    $txt = json_encode($json);
    fwrite($handler, $txt);
    fclose($handler);
}

function checkAc()
{
    $file = __DIR__ . "/markAc.json";
    $json = json_decode(file_get_contents($file), true);
    $end = array_pop($json);
    return $end[1];
}

This worked, but not on the first attempt. I had to figure things out along the way and tweak them as needed. Hopefully, with the benefit of my experience, you won’t need to do as much to get it right the first time.

The Value of a Raspberry Pi Thermostat Controller

I was motivated to automate my air conditioning because the unconventional layout of my home sometimes resulted in vastly different temperatures in different rooms. But automating heating and cooling has benefits even for those who don’t face this particular issue.

People across the world live in various climates and pay different prices for energy (and different rates at different times of the day), so even modest improvements in energy efficiency can make automation worthwhile in certain regions.

Furthermore, as more and more homes become automated, there is reason to explore the potential of automating older power-hungry devices and appliances such as air conditioners, electric heaters, and water heaters. Because these devices are typically bulky, difficult to install, and expensive to upgrade, many people will be stuck with them for years to come. Making these “dumb” devices a bit smarter can not only improve comfort and energy efficiency but also extend their life spans.

Original article source at: https://www.toptal.com/

#raspberrypi #tutorial 

How to Control Your Climate with This Raspberry Pi Thermostat Tutorial
Bongani  Ngema

Bongani Ngema

1670357880

A Finite-state Machine Tutorial: Unity AI Development

Ever wonder how game developers deliver entertaining interplay with the non-player characters they create? Learn how to develop them yourself in our finite-state machine tutorial.

In the competitive world of gaming, developers strive to offer an entertaining user experience for those who interact with the non-player characters (NPCs) that we create. Developers can deliver this interactivity by using finite-state machines (FSMs) to create AI solutions that simulate intelligence in our NPCs.

AI trends have shifted toward behavior trees, but FSMs remain relevant. They’re incorporated—in one capacity or another—into virtually every electronic game.

Anatomy of an FSM

An FSM is a model of computation in which only one of a finite number of hypothetical states can be active at one time. An FSM transitions from one state to another, responding to conditions or inputs. Its core components include:

  • State: One of a finite set of options indicating the current overall condition of an FSM; any given state includes an associated set of actions
  • Action: What a state does when the FSM queries it
  • Decision: The logic establishing when a transition takes place
  • Transition: The process of changing states

While we will focus on FSMs from the perspective of AI implementation, concepts such as animation state machines and general game states also fall under the FSM umbrella.

Visualizing an FSM

Let’s consider the example of the classic arcade game Pac-Man. In the game’s initial state (the “chase” state), the NPCs are colorful ghosts that pursue and eventually outpace the player. The ghosts transition into the evade state whenever the player eats a power pellet and experiences a power-up, gaining the ability to eat the ghosts. The ghosts, now blue in color, evade the player until the power-up times out and the ghosts transition back to the chase state, in which their original behaviors and colors are restored.

A Pac-Man ghost is always in one of two states: chase or evade. Naturally, we must provide two transitions—one from chase to evade, the other from evade to chase:

 

Diagram: At left is the chase state. An arrow (indicating that the player ate the power pellet) leads to the evade state at right. A second arrow (indicating that the power pellet timed out) leads back to the chase state at left.

Transitions Between Pac-Man Ghost States

 

The finite-state machine, by design, queries the current state, which queries the decision(s) and action(s) of that state. The following diagram represents our Pac-Man example and shows a decision that checks the status of the player’s power-up. If a power-up has begun, the NPCs transition from chase to evade. If a power-up has ended, the NPCs transition from evade to chase. Finally, if there is no power-up change, no transition occurs.

 

Diamond-shaped diagram representing a cycle: Beginning at the left, there is a chase state implying a corresponding action. The chase state then points to the top, where there is a decision: If the player ate a power pellet, we continue to the evade state and evade action at the right. The evade state points to a decision at the bottom: If the power pellet timed out, we continue back to our starting point.

Components of the Pac-Man Ghost FSM

 

Scalability

FSMs free us to build modular AI. For instance, with just a single new action, we can create an NPC with a new behavior. Thus, we can ascribe a new action—the eating of a power pellet—to one of our Pac-Man ghosts, giving it the ability to eat power pellets while evading the player. We can reuse existing actions, decisions, and transitions to support this behavior.

Since the resources required to develop a unique NPC are minimal, we are well positioned to meet the evolving project requirements of multiple unique NPCs. On the other hand, an excessive number of states and transitions can get us tangled up in a spaghetti-state machine—an FSM whose overabundance of connections makes it difficult to debug and maintain.

Implementing an FSM in Unity

To demonstrate how to implement a finite-state machine in Unity, let’s create a simple stealth game. Our architecture will incorporate ScriptableObjects, which are data containers that can store and share information throughout the application, so that we do not need to reproduce it. ScriptableObjects are capable of limited processing, such as invoking actions and querying decisions. In addition to Unity’s official documentation, the older Game Architecture with Scriptable Objects talk remains an excellent resource if you want to dive deeper.

Before we add AI to this initial ready-to-compile project, consider the proposed architecture:

 

Diagram: Seven boxes that connect to one another, described in order of appearance, from left/top: The box labeled BaseStateMachine includes + CurrentState: BaseState. BaseStateMachine connects to BaseState with a bidirectional arrow. The box labeled BaseState includes + Execute(BaseStateMachine): void. BaseState connects to BaseStateMachine with a bidirectional arrow. Monodirectional arrows from State and RemainInState connect to BaseState. The box labeled State includes + Execute(BaseStateMachine): void, + Actions: List<Action>, and + Transition: List<Transition>. State connects to BaseState with a monodirectional arrow, to Action with a monodirectional arrow labeled "1," and to Transition with a monodirectional arrow labeled "1." The box labeled RemainInState includes + Execute(BaseStateMachine): void. RemainInState connects to BaseState with a monodirectional arrow. The box labeled Action includes + Execute(BaseStateMachine): void. A monodirectional arrow labeled "1" from State connects to Action. The box labeled Transition includes + Decide(BaseStateMachine): void, + TransitionDecision: Decision, + TrueState: BaseState, and + FalseState: BaseState. Transition connects to Decision with a monodirectional arrow. A monodirectional arrow labeled "1" from State connects to Transition. The box labeled Decision includes + Decide(BaseStateMachine): bool.

Proposed FSM Architecture

 

In our sample game, the enemy (an NPC represented by a blue capsule) patrols. When the enemy sees the player (represented by a gray capsule), the enemy starts following the player:

 

 

Diagram: Five boxes that connect to one another, described in order of appearance, from left/top: The box labeled Patrol connects to the box labeled IF player is in line of sight with a monodirectional arrow, and to the box labeled Patrol Action with a monodirectional arrow that is labeled "state." The box labeled IF player is in line of sight has an additional label, "decision," just below the box. The box labeled IF player is in line of sight connects to the box labeled Chase with a monodirectional arrow. A monodirectional arrow from the box labeled Patrol connects to the box labeled IF player is in line of sight. The box labeled Chase connects to the box labeled Chase Action with a monodirectional arrow that is labeled "state." A monodirectional arrow from the box labeled IF player is in line of sight connects to the box labeled Chase. A monodirectional arrow from the box labeled Patrol connects to the box labeled Patrol Action. A monodirectional arrow from the box labeled Chase connects to the box labeled Chase Action.

Core Components of Our Sample Stealth Game FSM

 

In contrast with Pac-Man, the enemy in our game will not return to the default state (“patrol”) once it follows the player.

Creating Classes

Let’s begin by creating our classes. In a new scripts folder, we will add all of the proposed architectural building blocks as C# scripts.

Implementing the BaseStateMachine Class

The BaseStateMachine class is the only MonoBehaviour we will add to our AI-enabled NPCs. For simplicity’s sake, our BaseStateMachine will be bare-bones. If we wanted to, however, we could create an inherited custom FSM that stores additional parameters and references to additional components. Note that the code will not compile until we have added our BaseState class, which we’ll do later in this tutorial.

The code for BaseStateMachine refers to and executes the current state to perform the actions and see if a transition is warranted:

using UnityEngine;

namespace Demo.FSM
{
    public class BaseStateMachine : MonoBehaviour
    {
        [SerializeField] private BaseState _initialState;

        private void Awake()
        {
            CurrentState = _initialState;
        }

        public BaseState CurrentState { get; set; }

        private void Update()
        {
            CurrentState.Execute(this);
        }
    }
}

Implementing the BaseState Class

Our state is of the type BaseState, which we derive from a ScriptableObject. BaseState includes a single method, Execute, taking BaseStateMachine as its argument and passing to it actions and transitions. This is how BaseState looks:

using UnityEngine;

namespace Demo.FSM
{
    public class BaseState : ScriptableObject
    {
        public virtual void Execute(BaseStateMachine machine) { }
    }
}

Implementing the State and RemainInState Classes

We now derive two classes from BaseState. First, we have the State class, which stores its actions and transitions in two lists and overrides the base Execute to run each action and evaluate each transition:

using System.Collections.Generic;
using UnityEngine;

namespace Demo.FSM
{
    [CreateAssetMenu(menuName = "FSM/State")]
    public sealed class State : BaseState
    {
        public List<FSMAction> Action = new List<FSMAction>();
        public List<Transition> Transitions = new List<Transition>();

        public override void Execute(BaseStateMachine machine)
        {
            foreach (var action in Action)
                action.Execute(machine);

            foreach(var transition in Transitions)
                transition.Execute(machine);
        }
    }
}

Second, we have the RemainInState class, which tells the FSM when not to perform a transition:

using UnityEngine;

namespace Demo.FSM
{
    [CreateAssetMenu(menuName = "FSM/Remain In State", fileName = "RemainInState")]
    public sealed class RemainInState : BaseState
    {
    }
}

Note that these classes will not compile until we have added the FSMAction, Decision, and Transition classes.

Implementing the FSMAction Class

In the Proposed FSM Architecture diagram, this base class is labeled “Action.” We will name it FSMAction instead, since Action is already in use by the .NET System namespace.

FSMAction, a ScriptableObject, has no behavior of its own, so we will define it as an abstract class. As development progresses, we may need a single action to serve more than one state. Fortunately, we can associate an FSMAction with as many states, from as many FSMs, as we wish.

The FSMAction abstract class looks like this:

using UnityEngine;

namespace Demo.FSM
{
    public abstract class FSMAction : ScriptableObject
    {
        public abstract void Execute(BaseStateMachine stateMachine);
    }
}

Implementing the Decision and Transition Classes

To finish up our FSM, we will define two more classes. First, we have Decision, an abstract class from which all concrete decisions derive their custom behavior:

using UnityEngine;

namespace Demo.FSM
{
    public abstract class Decision : ScriptableObject
    {
        public abstract bool Decide(BaseStateMachine state);
    }
}

The second class, Transition, contains the Decision object and two states:

  • A state to transition to if the Decision yields true.
  • Another state to transition to if the Decision yields false.

It looks like this:

using UnityEngine;

namespace Demo.FSM
{
    [CreateAssetMenu(menuName = "FSM/Transition")]
    public sealed class Transition : ScriptableObject
    {
        public Decision Decision;
        public BaseState TrueState;
        public BaseState FalseState;

        public void Execute(BaseStateMachine stateMachine)
        {
            // Cache the decision so that a true result with a RemainInState
            // TrueState doesn't fall through and wrongly apply FalseState.
            bool decision = Decision.Decide(stateMachine);

            if (decision && !(TrueState is RemainInState))
                stateMachine.CurrentState = TrueState;
            else if (!decision && !(FalseState is RemainInState))
                stateMachine.CurrentState = FalseState;
        }
    }
}

Everything we have built up to this point should compile without any errors. If you experience issues, check your Unity Editor version; an out-of-date editor can cause errors. Also ensure that all files have been properly cloned from the original project folder and that no publicly accessed variables are declared private.

Creating Custom Actions and Decisions

Now, with the heavy lifting done, we are ready to implement custom actions and decisions in a new scripts folder.

Implementing the Patrol and Chase Classes

When we analyze the Core Components of Our Sample Stealth Game FSM diagram, we see that our NPC can be in one of two states:

  1. Patrol state — Associated with the state are:
    • One action: NPC visits random patrol points around the world.
    • One transition: NPC checks whether the player is in sight and, if so, transitions to the chase state.
    • One decision: NPC checks whether the player is in sight.
  2. Chase state — Associated with the state is:
    • One action: NPC chases the player.

We can reuse our existing transition implementation via Unity’s GUI, as we’ll discuss later. This leaves two actions (PatrolAction and ChaseAction) and a decision for us to code.

The patrol state action (which derives from the base FSMAction) overrides the Execute method to get two components:

  1. PatrolPoints, which tracks patrol points.
  2. NavMeshAgent, Unity’s implementation for navigation in 3D space.

The override then checks whether the AI agent has reached its destination and, if so, moves to the next destination. It looks like this:

using Demo.Enemy;
using Demo.FSM;
using UnityEngine;
using UnityEngine.AI;

namespace Demo.MyFSM
{
    [CreateAssetMenu(menuName = "FSM/Actions/Patrol")]
    public class PatrolAction : FSMAction
    {
        public override void Execute(BaseStateMachine stateMachine)
        {
            var navMeshAgent = stateMachine.GetComponent<NavMeshAgent>();
            var patrolPoints = stateMachine.GetComponent<PatrolPoints>();

            if (patrolPoints.HasReached(navMeshAgent))
                navMeshAgent.SetDestination(patrolPoints.GetNext().position);
        }
    }
}
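
This code relies on the sample project's PatrolPoints component, which the article doesn't show. A minimal sketch consistent with how PatrolAction uses it might look like the following; the threshold field and the cycling logic are my assumptions, not the original implementation:

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.AI;

namespace Demo.Enemy
{
    public class PatrolPoints : MonoBehaviour
    {
        // Assumed: patrol points assigned in the inspector (Point1, Point2, etc.).
        public List<Transform> Points;
        [SerializeField] private float _reachedThreshold = 0.5f;
        private int _currentIndex;

        public bool HasReached(NavMeshAgent agent)
        {
            // Reached when the path is computed and the agent is within range.
            return !agent.pathPending && agent.remainingDistance <= _reachedThreshold;
        }

        public Transform GetNext()
        {
            _currentIndex = (_currentIndex + 1) % Points.Count;
            return Points[_currentIndex];
        }
    }
}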

We may want to consider caching the PatrolPoints and NavMeshAgent components. Caching would allow us to share ScriptableObjects for actions among agents without the performance impact of running GetComponent on each query of the finite-state machine.

To be clear, we cannot cache component instances inside the Execute method itself, because a single ScriptableObject action is shared by every agent that uses it. So instead, we’ll add a custom GetComponent method to BaseStateMachine. Our custom GetComponent caches the instance the first time it is called and returns the cached instance on subsequent calls. For reference, this is the implementation of BaseStateMachine with caching:

using System;
using System.Collections.Generic;
using UnityEngine;

namespace Demo.FSM
{
    public class BaseStateMachine : MonoBehaviour
    {
        [SerializeField] private BaseState _initialState;
        private Dictionary<Type, Component> _cachedComponents;
        private void Awake()
        {
            CurrentState = _initialState;
            _cachedComponents = new Dictionary<Type, Component>();
        }

        public BaseState CurrentState { get; set; }

        private void Update()
        {
            CurrentState.Execute(this);
        }

        public new T GetComponent<T>() where T : Component
        {
            if(_cachedComponents.ContainsKey(typeof(T)))
                return _cachedComponents[typeof(T)] as T;

            var component = base.GetComponent<T>();
            if(component != null)
            {
                _cachedComponents.Add(typeof(T), component);
            }
            return component;
        }

    }
}

Like its counterpart PatrolAction, the ChaseAction class overrides the Execute method, this time to get the EnemySightSensor and NavMeshAgent components. In contrast with PatrolAction, however, ChaseAction does not check whether the AI agent has reached its destination; it simply sets the destination to Player.position on every update:

using Demo.Enemy;
using Demo.FSM;
using UnityEngine;
using UnityEngine.AI;

namespace Demo.MyFSM
{
    [CreateAssetMenu(menuName = "FSM/Actions/Chase")]
    public class ChaseAction : FSMAction
    {
        public override void Execute(BaseStateMachine stateMachine)
        {
            var navMeshAgent = stateMachine.GetComponent<NavMeshAgent>();
            var enemySightSensor = stateMachine.GetComponent<EnemySightSensor>();

            navMeshAgent.SetDestination(enemySightSensor.Player.position);
        }
    }
}

Implementing the InLineOfSightDecision Class

The final piece is the InLineOfSightDecision class, which inherits the base Decision and gets the EnemySightSensor component to check if the player is in the line of sight of the NPC:

using Demo.Enemy;
using Demo.FSM;
using UnityEngine;
namespace Demo.MyFSM
{
    [CreateAssetMenu(menuName = "FSM/Decisions/In Line Of Sight")]
    public class InLineOfSightDecision : Decision
    {
        public override bool Decide(BaseStateMachine stateMachine)
        {
            var enemyInLineOfSight = stateMachine.GetComponent<EnemySightSensor>();
            return enemyInLineOfSight.Ping();
        }
    }
}
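
As with PatrolPoints, the EnemySightSensor component ships with the sample project rather than appearing in the article. Here is a rough sketch of what Ping might do, assuming a simple raycast check; the Player reference and ignore mask mirror the fields visible in the article's inspector screenshots, but the implementation details are guesses:

using UnityEngine;

namespace Demo.Enemy
{
    public class EnemySightSensor : MonoBehaviour
    {
        public Transform Player;
        [SerializeField] private LayerMask _ignoreMask;

        public bool Ping()
        {
            // Line of sight holds if the first thing a ray toward the
            // player hits is the player itself.
            var toPlayer = Player.position - transform.position;
            if (Physics.Raycast(transform.position, toPlayer.normalized,
                    out RaycastHit hit, toPlayer.magnitude, ~_ignoreMask.value))
            {
                return hit.transform == Player;
            }
            return false;
        }
    }
}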

Attaching Behaviors to States

We are finally ready to attach behaviors to the Enemy agent. These are created in the Unity Editor’s Project window.

Adding the Patrol and Chase States

Let’s create two states and name them “Patrol” and “Chase”:

  • Right Click > Create > FSM > State

While here, let’s also create a RemainInState object:

  • Right Click > Create > FSM > Remain In State

Now, it’s time to create the actions we just coded:

  • Right Click > Create > FSM > Actions > Patrol
  • Right Click > Create > FSM > Actions > Chase

To create the decision asset:

  • Right Click > Create > FSM > Decisions > In Line of Sight

To enable a transition from PatrolState to ChaseState, let’s first create the transition scriptable object:

  • Right Click > Create > FSM > Transition
  • Choose a name you like. I called mine Spotted Enemy.

We’ll populate the resulting inspector window as follows:

 

Spotted Enemy (Transition) screen includes four lines: Script's value is set to "Transition" and is grayed out. Decision's value is set to "LineOfSightDecision (In Line Of Sight)." True State's value is set to "ChaseState (State)." False State's value is set to "RemainInState (Remain In State)."

Filling Out the Spotted Enemy (Transition) Inspector Window

 

Then we’ll complete the Chase State inspector dialog as follows:

 

Chase State (State) screen begins with a label "Open." Beside the label "Script" "State" is selected. Beside the "Action" label, "1" is selected. From the "Action" dropdown, "Element 0 Chase Action (Chase Action)" is selected. There is a plus sign and minus sign that follows. Beside the "Transitions" label, "0" is selected. From the "Transitions" dropdown, "List is Empty" displays. There is a plus sign and minus sign that follows.

Filling Out the Chase State Inspector Window

 

Next, we’ll complete the Patrol State dialog:

 

The Patrol State (State) screen begins with a label "Open." Beside the label "Script" "State" is selected. Beside the "Action" label, "1" is selected. From the "Action" dropdown, "Element 0 Patrol Action (Patrol Action)" is selected. There is a plus and minus sign that follows. Beside the "Transitions" label, "1" is selected. From the "Transitions" dropdown, "Element 0 SpottedEnemy (Transition)" displays. There is a plus sign and minus sign that follows.

Filling Out the Patrol State Inspector Window

 

Finally, we’ll add the BaseStateMachine component to the enemy object: In the Unity Editor’s Project window, open the SampleScene asset, select the Enemy object from the Hierarchy panel, and, in the Inspector window, select Add Component > Base State Machine:

 

The Base State Machine (Script) screen: Beside the grayed out "Script" label, "BaseStateMachine" is selected and grayed out. Beside the "Initial State" label, "PatrolState (State)" is selected.

Adding the Base State Machine (Script) Component

For any issues, double-check that your game objects are configured correctly. For example, confirm that the Enemy object includes the PatrolPoints script component and that the patrol point objects Point1, Point2, etc., are assigned. This configuration can be lost if the project is opened with a mismatched editor version.

Now you are ready to play the sample game and observe that the enemy will follow the player when the player steps into the enemy’s line of sight.

Using FSMs to Create a Fun, Interactive User Experience

In this finite-state machine tutorial, we created a highly modular FSM-based AI (and corresponding GitHub repo) that we can reuse in future projects. Thanks to this modularity, we can always add power to our AI by introducing new components.

But our architecture also paves the way for graphical-first FSM design, which would elevate our developer experience to a new level of professionalism. We could then create FSMs for our games more rapidly—and with better creative accuracy.

Original article source at: https://www.toptal.com/

#machine #tutorial #ai 

A Finite-state Machine Tutorial: Unity AI Development

An xNode-based Graphical FSM Tutorial

In “Unity AI Development: A Finite-state Machine Tutorial,” we created a simple stealth game—a modular FSM-based AI. In the game, an enemy agent patrols the gamespace. When it spots the player, the enemy changes its state and follows the player instead of patrolling.

In this second leg of our Unity journey, we will build a graphical user interface (GUI) to create the core components of our finite-state machine (FSM) more rapidly, and with an improved developer experience.

Let’s Refresh

The FSM detailed in the previous tutorial was built from architectural blocks implemented as C# scripts. We added custom ScriptableObject actions and decisions as classes. Our ScriptableObject approach gave us an easily maintainable and customizable FSM. In this tutorial, we replace our FSM’s drag-and-drop ScriptableObjects with a graphical option.

In your game, if you’d like for the player to win more easily, replace the player detection script with this updated script that narrows the enemy’s field of vision.

Getting Started With xNode

We’ll build our graphical editor using xNode, a framework for building node-based graphs and editors, which will display our FSM’s flow visually. Although Unity’s GraphView can accomplish the job, its API is both experimental and meagerly documented. xNode’s user interface delivers a superior developer experience, facilitating the prototyping and rapid expansion of our FSM.

Let’s add xNode to our project as a Git dependency using the Unity Package Manager:

  1. In Unity, click Window > Package Manager to launch the Package Manager window.
  2. Click + (the plus sign) at the window’s top-left corner and select Add package from git URL to display a text field.
  3. Type or paste https://github.com/siccity/xNode.git in the unlabeled text box and click the Add button.

Now we’re ready to dive deep and understand the key components of xNode:

  • Node class: Represents a node, a graph's most fundamental unit. In this xNode tutorial, we derive from the Node class new classes that declare nodes equipped with custom functionality and roles.
  • NodeGraph class: Represents a collection of nodes (Node class instances) and the edges that connect them. In this xNode tutorial, we derive from NodeGraph a new class that manipulates and evaluates the nodes.
  • NodePort class: Represents a communication gate, a port of type input or type output, located between Node instances in a NodeGraph. The NodePort class is unique to xNode.
  • [Input] attribute: The addition of the [Input] attribute to a port designates it as an input, enabling the port to pass values to the node it is part of. Think of the [Input] attribute as a function parameter.
  • [Output] attribute: The addition of the [Output] attribute to a port designates it as an output, enabling the port to pass values from the node it is part of. Think of the [Output] attribute as the return value of a function.

Visualizing the xNode Building Environment

In xNode, we work with graphs where each State and Transition takes the form of a node. Input and/or output connection(s) enable the node to relate to any or all other nodes in our graph.

Let’s imagine a node with three input values: two arbitrary and one boolean. The node will output one of the two arbitrary-type input values, depending on whether the boolean input is true or false.

 

The Branch node, represented by a large rectangle at center, includes the pseudocode "If C == True A Else B." On the left are three rectangles, each of which have an arrow that points to the Branch node: "A (arbitrary)," "B (arbitrary)," and "C (boolean)." The Branch node, finally, has an arrow that points to an "Output" rectangle.

An example Branch Node
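
In xNode code, such a branch node might be sketched like this; it is purely illustrative (a hypothetical node, not part of the FSM we build below):

using XNode;

public class BranchNode : Node
{
    [Input] public float a;        // arbitrary input A
    [Input] public float b;        // arbitrary input B
    [Input] public bool condition; // boolean input C

    [Output] public float result;  // resolves to A if C is true, else B

    // xNode calls GetValue to resolve what flows out of an output port.
    public override object GetValue(NodePort port)
    {
        return GetInputValue("condition", condition)
            ? GetInputValue("a", a)
            : GetInputValue("b", b);
    }
}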

 

To convert our existing FSM to a graph, we modify the State and Transition classes to inherit the Node class instead of the ScriptableObject class. We create a graph object of type NodeGraph to contain all of our State and Transition objects.

Modifying BaseStateMachine to Use As a Base Type

We’ll begin building our graphical interface by adding two new virtual methods to our existing BaseStateMachine class:

  • Init: Assigns the initial state to the CurrentState property
  • Execute: Executes the current state

Declaring these methods as virtual allows us to override them, so we can define the custom behaviors of classes inheriting the BaseStateMachine class for initialization and execution:

using System;
using System.Collections.Generic;
using UnityEngine;

namespace Demo.FSM
{
    public class BaseStateMachine : MonoBehaviour
    {
        [SerializeField] private BaseState _initialState;
        private Dictionary<Type, Component> _cachedComponents;
        private void Awake()
        {
            Init();
            _cachedComponents = new Dictionary<Type, Component>();
        }

        public BaseState CurrentState { get; set; }

        private void Update()
        {
            Execute();
        }

        public virtual void Init()
        {
            CurrentState = _initialState;
        }

        public virtual void Execute()
        {
            CurrentState.Execute(this);
        }

        // Allows us to execute consecutive calls of GetComponent in O(1) time
        public new T GetComponent<T>() where T : Component
        {
            if(_cachedComponents.ContainsKey(typeof(T)))
                return _cachedComponents[typeof(T)] as T;

            var component = base.GetComponent<T>();
            if(component != null)
            {
                _cachedComponents.Add(typeof(T), component);
            }
            return component;
        }

    }
}

Next, under our FSM folder, let’s create:

  • FSMGraph: A folder
  • BaseStateMachineGraph: A C# class within FSMGraph

For the time being, BaseStateMachineGraph will inherit just the BaseStateMachine class:

using UnityEngine;

namespace Demo.FSM.Graph
{
    public class BaseStateMachineGraph : BaseStateMachine
    {
    }
}

We can’t add functionality to BaseStateMachineGraph until we create our base node type; let’s do that next.

Implementing NodeGraph and Creating a Base Node Type

Under our newly created FSMGraph folder, we’ll create:

  • FSMGraph: A class

For now, FSMGraph will inherit just the NodeGraph class (with no added functionality):

using UnityEngine;
using XNode;

namespace Demo.FSM.Graph
{
    [CreateAssetMenu(menuName = "FSM/FSM Graph")]
    public class FSMGraph : NodeGraph
    {
    }
}

Before we create classes for our nodes, let’s add:

  • FSMNodeBase: A class to be used as a base class by all of our nodes

The FSMNodeBase class will contain an input named Entry of type FSMNodeBase to enable us to connect nodes to one another.

We will also add two helper functions:

  • GetFirst: Retrieves the first node connected to the requested output
  • GetAllOnPort: Retrieves all of the nodes connected to the requested output

using System.Collections.Generic;
using XNode;

namespace Demo.FSM.Graph
{
    public abstract class FSMNodeBase : Node
    {
        [Input(backingValue = ShowBackingValue.Never)] public FSMNodeBase Entry;

        protected IEnumerable<T> GetAllOnPort<T>(string fieldName) where T : FSMNodeBase
        {
            NodePort port = GetOutputPort(fieldName);
            for (var portIndex = 0; portIndex < port.ConnectionCount; portIndex++)
            {
                yield return port.GetConnection(portIndex).node as T;
            }
        }

        protected T GetFirst<T>(string fieldName) where T : FSMNodeBase
        {
            NodePort port = GetOutputPort(fieldName);
            if (port.ConnectionCount > 0)
                return port.GetConnection(0).node as T;
            return null;
        }
    }
} 

Ultimately, we’ll have two types of state nodes; let’s add a class to support these:

  • BaseStateNode: A base class to support both StateNode and RemainInStateNode

namespace Demo.FSM.Graph
{
    public abstract class BaseStateNode : FSMNodeBase
    {
    }
} 

Next, modify the BaseStateMachineGraph class:

using UnityEngine;
namespace Demo.FSM.Graph
{
    public class BaseStateMachineGraph : BaseStateMachine
    {
        public new BaseStateNode CurrentState { get; set; }
    }
}

Here, we’ve hidden the CurrentState property inherited from the base class and changed its type from BaseState to BaseStateNode.

Creating Building Blocks for Our FSM Graph

Now, to form our FSM’s main building blocks, let’s add three new classes to our FSMGraph folder:

  • StateNode: Represents the state of an agent. On execute, StateNode iterates over the TransitionNodes connected to its output port (retrieved by a helper method), querying each one to determine whether to transition to a different state or leave the state as is.
  • RemainInStateNode: Indicates that an agent should remain in its current state.
  • TransitionNode: Makes the decision to transition to a different state or stay in the same state.

In the previous Unity FSM tutorial, the State class iterates over the transitions list. Here in xNode, StateNode serves as State’s equivalent to iterate over the nodes retrieved via our GetAllOnPort helper method.

Now add an [Output] attribute to the outgoing connections (the transition nodes) to indicate that they should be part of the GUI. By xNode’s design, the attribute’s value originates in the source node: the node containing the field marked with the [Output] attribute. As we are using [Output] and [Input] attributes to describe relationships and connections that will be set by the xNode GUI, we can’t treat these values as we normally would. Consider how we iterate through Actions versus Transitions:

using System.Collections.Generic;
namespace Demo.FSM.Graph
{
    [CreateNodeMenu("State")]
    public sealed class StateNode : BaseStateNode 
    {
        public List<FSMAction> Actions;
        [Output] public List<TransitionNode> Transitions;
        public void Execute(BaseStateMachineGraph baseStateMachine)
        {
            foreach (var action in Actions)
                action.Execute(baseStateMachine);
            foreach (var transition in GetAllOnPort<TransitionNode>(nameof(Transitions)))
                transition.Execute(baseStateMachine);
        }
    }
}

In this case, the Transitions output can have multiple nodes attached to it; we have to call the GetAllOnPort helper method to obtain a list of the [Output] connections.

RemainInStateNode is, by far, our simplest class. Executing no logic, RemainInStateNode merely indicates to our agent—in our game’s case, the enemy—to remain in its current state:

namespace Demo.FSM.Graph
{
    [CreateNodeMenu("Remain In State")]
    public sealed class RemainInStateNode : BaseStateNode
    {
    }
}

At this point, the TransitionNode class is still incomplete and will not compile. The associated errors will clear once we update the class.

To build TransitionNode, we need to get around xNode’s requirement that the value of the output originates in the source node, as we did when we built StateNode. A major difference between StateNode and TransitionNode is that TransitionNode’s output may attach to only one node. In our case, GetFirst will fetch the one node attached to each of our ports (one state node to transition to in the true case and another to transition to in the false case):

namespace Demo.FSM.Graph
{
    [CreateNodeMenu("Transition")]
    public sealed class TransitionNode : FSMNodeBase
    {
        public Decision Decision;
        [Output] public BaseStateNode TrueState;
        [Output] public BaseStateNode FalseState;
        public void Execute(BaseStateMachineGraph stateMachine)
        {
            var trueState = GetFirst<BaseStateNode>(nameof(TrueState));
            var falseState = GetFirst<BaseStateNode>(nameof(FalseState));
            var decision = Decision.Decide(stateMachine);
            if (decision && !(trueState is RemainInStateNode))
            {
                stateMachine.CurrentState = trueState;
            }
            else if(!decision && !(falseState is RemainInStateNode))
                stateMachine.CurrentState = falseState;
        }
    }
}

Let’s have a look at the graphical results from our code.

Creating the Visual Graph

Now, with all the FSM classes sorted out, we can proceed to create our FSM Graph for the game’s enemy agent. In the Unity project window, right-click the EnemyAI folder and choose: Create > FSM > FSM Graph. To make our graph easier to identify, let’s rename it EnemyGraph.

In the xNode Graph editor window, right-click to reveal a drop-down menu listing State, Transition, and RemainInState. If the window is not visible, double-click the EnemyGraph file to launch the xNode Graph editor window.

To create the Chase and Patrol states:

  1. Right-click and choose State to create a new node.
  2. Name the node Chase.
  3. Return to the drop-down menu and choose State again to create a second node.
  4. Name the node Patrol.
  5. Drag and drop the existing Chase and Patrol actions onto their newly created corresponding states.

To create the transition:

  1. Right-click and choose Transition to create a new node.
  2. Assign the LineOfSightDecision object to the transition’s Decision field.

To create the RemainInState node:

  1. Right-click and choose Remain In State to create a new node.

To connect the graph:

  1. Connect the Patrol node’s Transitions output to the Transition node’s Entry input.
  2. Connect the Transition node’s True State output to the Chase node’s Entry input.
  3. Connect the Transition node’s False State output to the Remain In State node’s Entry input.

The graph should look like this:

 

Four nodes represented as four rectangles, each with Entry input circles on their top left side. From left to right, the Patrol state node displays one action: Patrol Action. The Patrol state node also includes a Transitions output circle on its bottom right side that connects to the Entry circle of the Transition node. The Transition node displays one decision: LineOfSight. It has two output circles on its bottom right side, True State and False State. True State connects to the Entry circle of our third structure, the Chase state node. The Chase state node displays one action: Chase Action. The Chase state node has a Transitions output circle. The second of Transition's two output circles, False State, connects to the Entry circle of our fourth and final structure, the RemainInState node (which appears below the Chase state node).

The Initial Look at Our FSM Graph

 

Nothing in the graph indicates which node—the Patrol or Chase state—is our initial node. The BaseStateMachineGraph class detects four nodes but, with no indicators present, cannot choose the initial state.

To resolve this issue, let’s create:

  • FSMInitialNode: A class whose single output of type StateNode is named InitialNode

Our output InitialNode denotes the initial state. Next, in FSMInitialNode, create:

  • NextNode: A property to enable us to fetch the node connected to the InitialNode output

using XNode;
namespace Demo.FSM.Graph
{
    [CreateNodeMenu("Initial Node"), NodeTint("#00ff52")]
    public class FSMInitialNode : Node
    {
        [Output] public StateNode InitialNode;
        public StateNode NextNode
        {
            get
            {
                var port = GetOutputPort("InitialNode");
                if (port == null || port.ConnectionCount == 0)
                    return null;
                return port.GetConnection(0).node as StateNode;
            }
        }
    }
}

Now that we have created the FSMInitialNode class, we can connect it to the Entry input of the initial state and return the initial state via the NextNode property.

Let’s go back to our graph and add the initial node. In the xNode editor window:

  1. Right-click and choose Initial Node to create a new node.
  2. Attach the Initial Node’s output to the Patrol node’s Entry input.

The graph should now look like this:

 

The same graph as in our previous image, with one added FSM Node green rectangle to the left of the other four rectangles. It has an Initial Node output (represented by a blue circle) that connects to the Patrol node's "Entry" input (represented by a dark red circle).

Our FSM Graph With the Initial Node Attached to the Patrol State

 

To make our lives easier, we’ll add to FSMGraph:

  • InitialState: A property

The first time we try to retrieve the InitialState property’s value, the getter of the property will traverse all nodes in our graph as it tries to find FSMInitialNode. Once FSMInitialNode is located, we use the NextNode property to find our initial state node:

using System.Linq;
using UnityEngine;
using XNode;
namespace Demo.FSM.Graph
{
    [CreateAssetMenu(menuName = "FSM/FSM Graph")]
    public sealed class FSMGraph : NodeGraph
    {
        private StateNode _initialState;
        public StateNode InitialState
        {
            get
            {
                if (_initialState == null)
                    _initialState = FindInitialStateNode();
                return _initialState;
            }
        }
        private StateNode FindInitialStateNode()
        {
            var initialNode = nodes.FirstOrDefault(x => x is FSMInitialNode);
            if (initialNode != null)
            {
                return (initialNode as FSMInitialNode).NextNode;
            }
            return null;
        }
    }
}

Now, in our BaseStateMachineGraph, let’s reference FSMGraph and override our BaseStateMachine’s Init and Execute methods. Overriding Init sets CurrentState as the graph’s initial state, and overriding Execute calls Execute on CurrentState:

using UnityEngine;
namespace Demo.FSM.Graph
{
    public class BaseStateMachineGraph : BaseStateMachine
    {
        [SerializeField] private FSMGraph _graph;
        public new BaseStateNode CurrentState { get; set; }
        public override void Init()
        {
            CurrentState = _graph.InitialState;
        }
        public override void Execute()
        {
            ((StateNode)CurrentState).Execute(this);
        }
    }
}

Now, let’s apply our graph to our Enemy object, and see it in action.

Testing the FSM Graph

In preparation for testing, in the Unity Editor’s Project window, we need to:

  1. Open the SampleScene asset.
  2. Locate our Enemy game object in the Unity hierarchy window.
  3. Replace the BaseStateMachine component with the BaseStateMachineGraph component: click Add Component and select the correct BaseStateMachineGraph script.
  4. Assign our FSM graph, EnemyGraph, to the Graph field of the BaseStateMachineGraph component.
  5. Delete the BaseStateMachine component (as it is no longer needed) by right-clicking it and selecting Remove Component.

Now the Enemy game object should look like this:

 

From top to bottom, in the Inspector screen, there is a check beside Enemy. "Player" is selected in the Tag drop-down, "Enemy" is selected in the Layer drop-down. The Transform drop-down shows position, rotation, and scale. The Capsule drop-down menu is compressed, and the Mesh Renderer, Capsule Collider, and Nav Mesh Agent drop-downs appear compressed with a check to their left. The Enemy Sight Sensor drop-down shows the Script and Ignore Mask. The PatrolPoints drop-down shows the Script and four PatrolPoints. There is a check mark beside the Base State Machine Graph (Script) drop-down. Script shows "BaseStateMachineGraph," Initial State shows "None (Base State)," and Graph shows "EnemyGraph (FSM Graph)." Finally, the Blue Enemy (Material) drop-down is compressed, and an "Add Component" button appears below it.

Enemy Game Object

 

That’s it! Now we have a modular FSM with a graphical editor. When we click the Play button, we see that our graphically created enemy AI works exactly like our previously created ScriptableObject enemy.

Forging Ahead: Optimizing Our FSM

The advantages of using a graphical editor are self-evident, but I’ll leave you with a word of caution: As you develop more sophisticated AI for your game, the number of states and transitions grows, and the FSM becomes confusing and difficult to read. The graphical editor grows to resemble a web of lines that originate in multiple states and terminate at multiple transitions—and vice versa, making our FSM difficult to debug.

As we did in the previous tutorial, we invite you to make the code your own, and leave the door open for you to optimize your stealth game and address these concerns. Imagine how helpful it would be to color-code your state nodes to indicate whether a node is active or inactive, or resize the RemainInState and Initial nodes to limit their screen real estate.

Such enhancements are not merely cosmetic. Color and size references would help us identify where and when to debug. A graph that is easy on the eye is also simpler to assess, analyze, and comprehend. Any next steps are up to you—with the foundation of our graphical editor in place, there’s no limit to the developer experience improvements you can make.

The editorial team of the Toptal Engineering Blog extends its gratitude to Goran Lalić and Maddie Douglas for reviewing the code samples and other technical content presented in this article.

Original article source at: https://www.toptal.com/

#node #tutorial 

An xNode-based Graphical FSM Tutorial
Dipesh Malvia

Dipesh Malvia

1669890624

Build Rest Api Project With Express & MongoDB | Express Router | Node.js Tutorial for Beginners #8

Build Rest Api Project With Express & MongoDB | Express Router | Node.js Tutorial for Beginners #8

In this video we will start building a contact management REST API project using Express & MongoDB. We will start with the project intro, Express fundamentals, and routing in detail.

⭐️ Support my channel⭐️ https://www.buymeacoffee.com/dipeshmalvia

⭐️ GitHub link for Reference ⭐️ https://github.com/dmalvia/Express_MongoDB_Rest_API_Tutorial

⭐️ Node.js for beginners Playlist ⭐️ https://youtube.com/playlist?list=PLTP3E5bPW796_icZanMqhdg7i0Cl7Y51F

🔥 Video contents... ENJOY 👇 

  • 0:00:00 - Intro 
  • 0:00:33 - Project Introduction & Rest API convention 
  • 0:02:02 - Project Setup - Contact Management App 
  • 0:05:30 - Create an Express Server 
  • 0:07:30 - Thunder Client Setup 
  • 0:10:00 - Express Router & Contacts CRUD Route Setup 
  • 0:14:43 - Create Contact Controller for Contacts CRUD Operations 
  • 0:20:03 - Multiple HTTP Methods per Route 
  • 0:20:59 - Built-in Middleware for POST Request Body 
  • 0:23:42 - Express - Throw Error 
  • 0:24:33 - Outro 

⭐️ JavaScript ⭐️ 

🔗 Social Medias 🔗 

⭐️ Tags ⭐️ - Node.js, Express & MongoDB Project - Build Rest API Project Express & MongoDB - Express Routing Tutorial - Node.Js & Express Crash Course 

⭐️ Hashtags ⭐️ #nodejs #express #beginners #tutorial 

Disclaimer: It doesn't feel good to have a disclaimer in every video, but this is how the world is right now. All videos are for educational purposes; use them wisely. Any video may have a slight mistake, so please make decisions based on your own research. This video is not forcing anything on you.

https://youtu.be/3qau30Waeyc

 

Build Rest Api Project With Express & MongoDB | Express Router | Node.js Tutorial for Beginners #8
Noah Saunders

Noah Saunders

1669796397

Learn JavaScript Reduce Function in 18 Minutes

Learn JavaScript Reduce Function in 18 minutes (for beginners). JavaScript’s reduce method is one of the cornerstones of functional programming. Let’s explore how it works, when you should use it, and some of the cool things it can do.
 

🕐 TIMESTAMPS:
00:00 Introduction
00:15 Skillshare Sponsorship
01:58 Reduce Function Lesson Intro
03:48 Reduce Function Example #1 
05:30 How to write a Reduce Function (2 Methods)
11:57 Reduce Function Example #2
16:20 Reduce Function Lesson Summary
17:42 Outro


A Guide To The Reduce Method In JavaScript

A Basic Reduction

Use it when: You have an array of amounts and you want to add them all up.

const euros = [29.76, 41.85, 46.5];

const sum = euros.reduce((total, amount) => total + amount); 

sum // 118.11

How to use it:

  • In this example, Reduce accepts two parameters, the total and the current amount.
  • The reduce method cycles through each number in the array much like it would in a for-loop.
  • When the loop starts the total value is the number on the far left (29.76) and the current amount is the one next to it (41.85).
  • In this particular example, we want to add the current amount to the total.
  • The calculation is repeated for each amount in the array, but each time the current value changes to the next number in the array, moving right.
  • When there are no more numbers left in the array the method returns the total value.

The ES5 Version of the Reduce Method in JavaScript

If you have never used ES6 syntax before, don’t let the example above intimidate you. It’s exactly the same as writing:

var euros = [29.76, 41.85, 46.5]; 

var sum = euros.reduce( function(total, amount){
  return total + amount
});

sum // 118.11

We use const instead of var, we replace the word function with a “fat arrow” (=>) placed after the parameters, and we omit the word return.

I’ll use ES6 syntax for the rest of the examples, since it’s more concise and leaves less room for errors.

Finding an Average with the Reduce Method in JavaScript

Instead of logging the sum, you could divide the sum by the length of the array before you return a final value.

The way to do this is by taking advantage of the other arguments in the reduce method. The first of those arguments is the index. Much like a for-loop, the index refers to the number of times the reducer has looped over the array. The last argument is the array itself.

const euros = [29.76, 41.85, 46.5];

const average = euros.reduce((total, amount, index, array) => {
  total += amount;
  if( index === array.length-1) { 
    return total/array.length;
  }else { 
    return total;
  }
});

average // 39.37

Map and Filter as Reductions

If you can use the reduce function to spit out an average then you can use it any way you want.

For example, you could double the total, or halve each number before adding them together, or use an if statement inside the reducer to only add numbers that are greater than 10. My point is that the Reduce Method in JavaScript gives you a mini CodePen where you can write whatever logic you want. It will repeat the logic for each amount in the array and then return a single value.

The thing is, you don’t always have to return a single value. You can reduce an array into a new array.

For instance, let’s reduce an array of amounts into another array where every amount is doubled. To do this we need to set the initial value for our accumulator to an empty array.

The initial value is the value of the total parameter when the reduction starts. You set the initial value by adding a comma followed by your initial value inside the parentheses, after the reducer’s curly braces (the 0 in the example below).

const average = euros.reduce((total, amount, index, array) => {
  total += amount;
  if (index === array.length - 1) {
    return total / array.length;
  }
  return total;
}, 0);

In the earlier examples, I omitted the initial value. When the initial value is omitted, the total defaults to the first amount in the array.

By setting the initial value to an empty array we can then push each amount into the total. If we want to reduce an array of values into another array where every value is doubled, we need to push the amount * 2. Then we return the total when there are no more amounts to push.

const euros = [29.76, 41.85, 46.5];

const doubled = euros.reduce((total, amount) => {
  total.push(amount * 2);
  return total;
}, []);

doubled // [59.52, 83.7, 93]

We’ve created a new array where every amount is doubled. We could also filter out numbers we don’t want to double by adding an if statement inside our reducer.

const euro = [29.76, 41.85, 46.5];

const above30 = euro.reduce((total, amount) => {
  if (amount > 30) {
    total.push(amount);
  }
  return total;
}, []);

above30 // [ 41.85, 46.5 ]

These operations are the map and filter methods rewritten as a reduce method.

For these examples, it would make more sense to use map or filter because they are simpler to use. The benefit of using reduce comes into play when you want to map and filter together and you have a lot of data to go over.

If you chain map and filter together you are doing the work twice. You filter every single value and then you map the remaining values. With reduce you can filter and then map in a single pass.

Use map and filter, but when you start chaining lots of methods together over a lot of data, you now know it can be faster to reduce instead, as sketched below.
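To make that concrete, here is a sketch of both approaches side by side (the variable names are mine; the difference only matters on large arrays):

const euros = [29.76, 41.85, 46.5];

// Two passes: filter walks the whole array, then map walks the survivors.
const doubledOver30Chained = euros
  .filter(amount => amount > 30)
  .map(amount => amount * 2);

// One pass: the reducer filters and maps in the same loop.
const doubledOver30Reduced = euros.reduce((total, amount) => {
  if (amount > 30) {
    total.push(amount * 2);
  }
  return total;
}, []);

// Both produce [83.7, 93]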

Creating a Tally with the Reduce Method In JavaScript

Use it when: You have a collection of items and you want to know how many of each item are in the collection.

const fruitBasket = ['banana', 'cherry', 'orange', 'apple', 'cherry', 'orange', 'apple', 'banana', 'cherry', 'orange', 'fig' ];

const count = fruitBasket.reduce( (tally, fruit) => {
  tally[fruit] = (tally[fruit] || 0) + 1 ;
  return tally;
} , {})

count // { banana: 2, cherry: 3, orange: 3, apple: 2, fig: 1 }

To tally items in an array our initial value must be an empty object, not an empty array like it was in the last example.

Since we are going to be returning an object we can now store key-value pairs in the total.

fruitBasket.reduce( (tally, fruit) => {
  tally[fruit] = 1;
  return tally;
}, {})

On our first pass, we want the key to be named after the current fruit and we want to give it a value of 1.

This gives us an object with all the fruit as keys, each with a value of 1. But we want the count of a fruit to increase each time it repeats.

To do this, on each pass we check whether our tally already contains a key for the current fruit of the reducer. If it doesn’t, then we create it. If it does, then we increment the count by one.

fruitBasket.reduce((tally, fruit) => {
  if (!tally[fruit]) {
    tally[fruit] = 1;
  } else {
    tally[fruit] = tally[fruit] + 1;
  }
  return tally;
}, {});

I rewrote the exact same logic in a more concise way up top.

Flattening an array of arrays with the Reduce Method In JavaScript

We can use reduce to flatten nested arrays into a single array.

We set the initial value to an empty array and then concatenate the current value to the total.

const data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];

const flat = data.reduce((total, amount) => {
  return total.concat(amount);
}, []);

flat // [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
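Worth noting (my addition, not from the original): if your environment supports ES2019, Array.prototype.flat does the same thing for a single level of nesting:

data.flat() // [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]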

More often than not, information is nested in more complicated ways. For instance, let’s say we just want all the colors in the data variable below.

const data = [
  {a: 'happy', b: 'robin', c: ['blue','green']}, 
  {a: 'tired', b: 'panther', c: ['green','black','orange','blue']}, 
  {a: 'sad', b: 'goldfish', c: ['green','red']}
];

We’re going to step through each object and pull out the colors. We do this by pointing to amount.c for each object in the array. We then use a forEach loop to push every value in the nested array into our total.

const colors = data.reduce((total, amount) => {
  amount.c.forEach( color => {
      total.push(color);
  })
  return total;
}, [])

colors //['blue','green','green','black','orange','blue','green','red']

If we only need unique colors, then we can check whether the color already exists in the total before we push it.

const uniqueColors = data.reduce((total, amount) => {
  amount.c.forEach( color => {
    if (total.indexOf(color) === -1){
     total.push(color);
    }
  });
  return total;
}, []);

uniqueColors // [ 'blue', 'green', 'black', 'orange', 'red' ]
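A small aside that isn’t in the original: on modern engines you can make the membership check read more naturally with Array.prototype.includes.

const uniqueColors2 = data.reduce((total, amount) => {
  amount.c.forEach(color => {
    if (!total.includes(color)) { // same check as indexOf === -1
      total.push(color);
    }
  });
  return total;
}, []);

uniqueColors2 // same result as above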

Piping with Reduce

An interesting aspect of the reduce method in JavaScript is that you can reduce over functions as well as numbers and strings.

Let’s say we have a collection of simple mathematical functions. These functions allow us to increment, decrement, double and halve an amount.

function increment(input) { return input + 1;}

function decrement(input) { return input - 1; }

function double(input) { return input * 2; }

function halve(input) { return input / 2; }

For whatever reason, we need to increment, then double, then decrement an amount.

You could write a function that takes an input and returns (input + 1) * 2 - 1. The problem is that we know we are going to need to increment the amount three times, then double it, then decrement it, and then halve it at some point in the future. We don’t want to have to rewrite our function every time, so we’re going to use reduce to create a pipeline.

A pipeline is a term used for a list of functions that transform some initial value into a final value. Our pipeline will consist of our three functions in the order that we want to use them.

let pipeline = [increment, double, decrement];

Instead of reducing an array of values we reduce over our pipeline of functions. This works because we set the initial value as the amount we want to transform.

const result = pipeline.reduce(function(total, func) {
  return func(total);
}, 1);

result // 3

Because the pipeline is an array, it can be easily modified. If we want to increment something three times, then double it, decrement it, and halve it, then we just alter the pipeline.

pipeline = [
  increment,
  increment,
  increment,
  double,
  decrement,
  halve
];

The reduce function stays exactly the same.
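As a quick check (my arithmetic, not from the original): starting from an initial value of 1 again, the new pipeline computes ((1 + 3) * 2 - 1) / 2, so the result would be 3.5.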

Silly Mistakes to avoid

If you don’t pass in an initial value, reduce will assume the first item in your array is your initial value. This worked fine in the first few examples because we were adding up a list of numbers.

If you’re trying to tally up fruit, and you leave out the initial value then things get weird. Not entering an initial value is an easy mistake to make and one of the first things you should check when debugging.

Another common mistake is to forget to return the total. You must return something for the reduce function to work. Always double check and make sure that you’re actually returning the value you want.
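Here is a sketch of what that second mistake looks like (the broken name is mine):

const euros = [29.76, 41.85, 46.5];

const broken = euros.reduce((total, amount) => {
  total + amount; // computed, but never returned
});

broken // undefined, because the reducer returns undefined on every pass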

Tools, Tips & References

  • Everything in this post came from a fantastic video series on egghead called Introducing Reduce. I give Mykola Bilokonsky full credit and I am grateful to him for everything I now know about using the Reduce Method In JavaScript. I have tried to rewrite much of what he explains in my own words as an exercise to better understand each concept. Also, it’s easier for me to reference an article, as opposed to a video, when I need to remember how to do something.
  • The MDN Reduce documentation labels what I called a total the accumulator. It is important to know this because most people will refer to it as an accumulator if you read about it online. Some people call it prev as in previous value. It all refers to the same thing. I found it easier to think of a total when I was learning reduce.
  • If you would like to practice using reduce I recommend signing up to freeCodeCamp and completing as many of the intermediate algorithms as you can using reduce.
  • If the ‘const’ variables in the example snippets are new to you I wrote another article about ES6 variables and why you might want to use them.
  • I also wrote an article called The Trouble With Loops that explains how to use map() and filter(), if they are new to you.

Thanks for reading!

#javascript #tutorial #beginner #reduce

Learn JavaScript Reduce Function in 18 Minutes
Nat Grady

1669357140

How to Efficiently Manage State in React

React Redux Tutorial – Efficient Management of States in React

React Redux Tutorial

React is one of the most popular JavaScript libraries which is used for front-end development. It has made our application development easier and faster by providing a component-based approach.

As you might know, it’s not a complete framework but just the view part of the MVC (Model-View-Controller) pattern. So, how do you keep track of the data and handle events in applications built with React? Well, this is where Redux comes in as a savior and manages the application’s data flow.

Through this blog on React Redux tutorial, I will explain everything you need to know on how to integrate Redux with React applications. Below are the topics I will be discussing under React Redux tutorial:

  • Why Redux With React?
  • What Is Redux?
  • Advantages Of Redux
  • Components Of Redux
  • React With Redux

Why Redux With React? – React Redux Tutorial

As I have already mentioned that React follows the component-based approach, where the data flows through the components. In fact, the data in React always flows from parent to child components which makes it unidirectional. This surely keeps our data organized and helps us in controlling the application better. Because of this, the application’s state is contained in specific stores and as a result, the rest of the components remain loosely coupled. This makes our application more flexible leading to increased efficiency. That’s why communication from a parent component to a child component is convenient.

Figure: React Redux Tutorial – Component-based approach
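To illustrate the one-way flow, here is a minimal sketch (the Parent and Child components are hypothetical, not from this tutorial):

function Child(props) {
  // Data arrives from above via props; the child never pushes it back up.
  return <span>{props.message}</span>;
}

function Parent() {
  // The parent decides what the child sees.
  return <Child message="data flows down" />;
}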

But what happens when we try to communicate from a non-parent component?

A child component has no built-in way to pass data back up to its parent, and React provides no direct channel for arbitrary component-to-component communication. You can work around this by passing callback functions down as props, but threading data through many layers of components is prone to errors and leads to spaghetti code. So, how can two non-parent components pass data to each other?

This is where React fails to provide a solution and Redux comes into the picture.

Figure: React Redux Tutorial – Redux data flow

Redux provides a “store” as a solution to this problem. A store is a place where you can store all your application state together. Now the components can “dispatch” state changes to the store and not directly to the other components. Then the components that need the updates about the state changes can “subscribe” to the store.

Thus, with Redux, it becomes clear where components get their state from, as well as where they should send it. Now the component initiating the change does not have to worry about the list of components needing the state change; it simply dispatches the change to the store. This is how Redux makes the data flow easier.

What Is Redux? – React Redux Tutorial

Just like React, Redux is also a library which is used widely for front-end development. It is basically a tool for managing both data-state and UI-state in JavaScript applications. Redux separates the application data and business logic into its own container in order to let React manage just the view. Rather than a traditional library or a framework, it’s an application data-flow architecture. It is most compatible with Single Page Applications (SPAs) where the management of the states over time can get complex. Check out this Full Stack developer course today to learn about React redux.

Redux was created by Dan Abramov and Andrew Clark around June 2015. It was inspired by Facebook’s Flux and influenced by functional programming language Elm. Redux got popular very quickly because of its simplicity, small size (only 2 KB) and great documentation.

 

Principles Of Redux

Redux follows three fundamental principles:

Single source of truth: The state of the entire application is stored in an object/ state tree within a single store. The single state tree makes it easier to keep track of the changes over time and debug or inspect the application. For a faster development cycle, it helps to persist the application’s state in development.

State is read-only: The only way to change the state is to trigger an action. An action is a plain JS object describing the change. Just like the state is the minimal representation of data, the action is the minimal representation of the change to that data. An action must have a type property (conventionally a String constant). All the changes are centralized and occur one by one in a strict order.

Changes are made with pure functions: In order to specify how the state tree is transformed by actions, you need pure functions. Pure functions are those whose return values depend solely on the values of their arguments. Reducers are just pure functions that take the previous state and an action and return the next state. You can have a single reducer in your application and as it grows, you can split it off into smaller reducers. These smaller reducers will then manage specific parts of the state tree.
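As a minimal sketch of that third principle, here is the classic counter reducer (an illustration, not code from this tutorial’s app):

function counter(state = 0, action) {
  switch (action.type) {
    case 'INCREMENT':
      return state + 1; // return a new state, never mutate the old one
    case 'DECREMENT':
      return state - 1;
    default:
      return state; // unknown actions leave the state untouched
  }
}

counter(counter(0, { type: 'INCREMENT' }), { type: 'INCREMENT' }) // 2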

Advantages Of Redux – React Redux Tutorial

Following are some of the major advantages of Redux:

  • Predictability of outcome – Since there is always one source of truth, i.e. the store, there is no confusion about how to sync the current state with actions and other parts of the application.
  • Maintainability – The code becomes easier to maintain with a predictable outcome and strict structure.
  • Server-side rendering – You just need to pass the store created on the server to the client side. This is very useful for the initial render and provides a better user experience as it optimizes application performance.
  • Developer tools – From actions to state changes, developers can track everything going on in the application in real time.
  • Community and ecosystem – Redux has a huge community behind it, which makes it even more appealing to use. A large community of talented individuals contributes to the betterment of the library and develops various applications with it.
  • Ease of testing – Redux code is mostly small, pure, isolated functions. This makes the code testable and independent.
  • Organization – Redux is very precise about how code should be organized, which makes the code more consistent and easier for a team to work with.

Components Of Redux – React Redux Tutorial

Redux has four components.

  1. Action
  2. Reducer
  3. Store
  4. View

Let us discuss them in detail:

Action – The only way to change state content is by emitting an action. Actions are plain JavaScript objects and the main source of information used to send data (user interactions, internal events such as API calls, and form submissions) from the application to the store. The store receives information only from actions, and you send actions to the store using store.dispatch().
Actions are simple JavaScript objects that must have a type property (usually a String constant) describing the type of action, along with the information being sent to the store.

{
    type: ADD_TODO,
    text
}
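Here ADD_TODO is assumed to be a string constant defined elsewhere (e.g. const ADD_TODO = 'ADD_TODO'), and text holds the todo’s text; the snippet uses the ES6 shorthand for text: text.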

Actions are created using action creators which are the normal functions that return actions.

function addTodo(text) {
    return {
        type: ADD_TODO,
        text
    }
}

To call actions anywhere in the app, use the dispatch() method:

dispatch(addTodo(text));

Reducer – Actions describe the fact that something happened, but don’t specify how the application’s state changes in response. This is the job of reducers. The name comes from the array reduce method, which accepts a callback (the reducer) and lets you get a single value out of multiple values, such as the sum of a list of integers or an accumulation of a stream of values. In Redux, reducers are pure functions that take the current state of the application and an action and then return a new state. Understanding how reducers work is important because they perform most of the work.

function reducer(state = initialState, action) {
    switch (action.type) {
        case ADD_TODO:
            return Object.assign({}, state,
                { todos: [ ...state.todos,
                    {
                        text: action.text,
                        completed: false
                    }
                    ]
                })
        default:
            return state
    }
}
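In this snippet, ADD_TODO and initialState (say, const initialState = { todos: [] }) are assumed to be defined elsewhere. Note that Object.assign copies the old state into a fresh object instead of mutating it, which is what keeps the reducer pure.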

Store – A store is a JavaScript object which can hold the application’s state and provide a few helper methods to access the state, dispatch actions and register listeners. The entire state/ object tree of an application is saved in a single store. As a result of this, Redux is very simple and predictable. We can pass middleware to the store to handle the processing of data as well as to keep a log of various actions that change the state of stores. All the actions return a new state via reducers.

import { createStore } from 'redux'
import todoApp from './reducers'
 
let store = createStore(todoApp);
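A quick sketch of the store’s helper methods, assuming the addTodo action creator and reducer from above are wired into this store:

// Runs after every dispatch and logs the latest state tree.
store.subscribe(() => console.log(store.getState()));

// Sends an action to the store; the reducer computes the next state.
store.dispatch(addTodo('Learn Redux'));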
View – Smart and dumb components together build up the view. The only purpose of the view is to display the data passed down by the store. The smart components are in charge of the actions. The dumb components underneath the smart components notify them in case they need to trigger an action. The smart components, in turn, pass down the props which the dumb components treat as callbacks.

Following is a diagram which shows how the data actually flows through all the above-described components in Redux.

Figure: React Redux Tutorial – Data flow in Redux

React With Redux  – React Redux Tutorial

Now that you are familiar with Redux and its components, let’s now see how you can integrate it with a React application.


STEP 1: You need to set up a basic React, webpack and Babel project. Following are the dependencies we are using in this application.

"dependencies": {
  "babel-core": "^6.10.4",
  "babel-loader": "^6.2.4",
  "babel-polyfill": "^6.9.1",
  "babel-preset-es2015": "^6.9.0",
  "babel-preset-react": "^6.11.1",
  "babel-register": "^6.9.0",
  "cross-env": "^1.0.8",
  "css-loader": "^0.23.1",
  "expect": "^1.20.1",
  "node-libs-browser": "^1.0.0",
  "node-sass": "^3.8.0",
  "react": "^15.1.0",
  "react-addons-test-utils": "^15.1.0",
  "react-dom": "^15.1.0",
  "react-redux": "^4.4.5",
  "redux": "^3.5.2",
  "redux-logger": "^2.6.1",
  "redux-promise": "^0.5.3",
  "redux-thunk": "^2.1.0",
  "sass-loader": "^4.0.0",
  "style-loader": "^0.13.1",
  "webpack": "^1.13.1",
  "webpack-dev-middleware": "^1.6.1",
  "webpack-dev-server": "^1.14.1",
  "webpack-hot-middleware": "^2.11.0"
},

STEP 2: Once you are done installing the dependencies, create a components folder inside the src folder. Within it, create an App.js file.

import React from 'react';
import UserList from '../containers/user-list';
import UserDetails from '../containers/user-detail';
require('../../scss/style.scss');
 
const App = () => (
    <div>
        <h2>User List</h2>
        <UserList />
        <hr />
        <h2>User Details</h2>
        <UserDetails />
    </div>
);
 
export default App;

STEP 3: Next create a new actions folder and create index.js in it.

export const selectUser = (user) => {
    console.log("You clicked on user: ", user.first);
    return {
        type: 'USER_SELECTED',
        payload: user
    }
};

STEP 4: Now create user-detail.js in a new folder called containers (this matches the '../containers/user-detail' import in App.js).

import React, {Component} from 'react';
import {connect} from 'react-redux';
 
class UserDetail extends Component {
    render() {
        if (!this.props.user) {
            return (<div>Select a user...</div>);
        }
        return (
            <div>
                <img height="150" width="150" src={this.props.user.thumbnail} />
                <h2>{this.props.user.first} {this.props.user.last}</h2>
                <h3>Age: {this.props.user.age}</h3>
                <h3>Description: {this.props.user.description}</h3>
            </div>
        );
    }
}
 
function mapStateToProps(state) {
    return {
        user: state.activeUser
    };
}
 
export default connect(mapStateToProps)(UserDetail);

STEP 5: Inside the same folder create user-list.js file.

import React, {Component} from 'react';
import {bindActionCreators} from 'redux';
import {connect} from 'react-redux';
import {selectUser} from '../actions/index'
class UserList extends Component {
    renderList() {
        return this.props.users.map((user) => {
            return (
                <li key={user.id}
                    onClick={() => this.props.selectUser(user)}
                >
                    {user.first} {user.last}
                </li>
            );
        });
    }
    render() {
        return (
            <ul>
                {this.renderList()}
            </ul>
        );
    }
}
function mapStateToProps(state) {
    return {
        users: state.users
    };
}
function matchDispatchToProps(dispatch){
    return bindActionCreators({selectUser: selectUser}, dispatch);
}
export default connect(mapStateToProps, matchDispatchToProps)(UserList);

STEP 6: Now create reducers folder and create index.js within it.

import {combineReducers} from 'redux';
import UserReducer from './reducer-users';
import ActiveUserReducer from './reducer-active-user';
 
const allReducers = combineReducers({
    users: UserReducer,
    activeUser: ActiveUserReducer
});
export default allReducers

STEP 7: Within the same reducers folder, create reducer-users.js file.

export default function () {
    return [
        {
            id: 1,
            first: "Maxx",
            last: "Flinn",
            age: 17,
            description: "Loves basketball",
            thumbnail: "https://goo.gl/1KNpiy"
        },
        {
            id: 2,
            first: "Allen",
            last: "Matt",
            age: 25,
            description: "Food Junky.",
            thumbnail: "https://goo.gl/rNLgwv"
        },
        {
            id: 3,
            first: "Kris",
            last: "Chen",
            age: 23,
            description: "Music Lover.",
            thumbnail: "https://goo.gl/EVbPHb"
        }
    ]
}

STEP 8: Now within reducers folder create a reducer-active-user.js file.

export default function (state = null, action) {
    switch (action.type) {
        case 'USER_SELECTED':
            return action.payload;
    }
    return state;
}

STEP 9: Now you need to create index.js in the root folder.

import 'babel-polyfill';
import React from 'react';
import ReactDOM from "react-dom";
import {Provider} from 'react-redux';
import {createStore, applyMiddleware} from 'redux';
import thunk from 'redux-thunk';
import promise from 'redux-promise';
import createLogger from 'redux-logger';
import allReducers from './reducers';
import App from './components/App';
 
const logger = createLogger();
const store = createStore(
    allReducers,
    applyMiddleware(thunk, promise, logger)
);
 
ReactDOM.render(
    <Provider store={store}>
        <App />
    </Provider>,
    document.getElementById('root')
);

STEP 10: Now that you are done writing the code, run your dev server and open the application at localhost:3000.

This brings us to the end of the blog on the React Redux tutorial. I hope that through this React Redux tutorial blog I was able to clearly explain what Redux is, its components, and why we use it with React. You can refer to this blog on ReactJS Tutorial in case you want to learn more about React.

If you want to get trained in React and wish to develop interesting UI’s on your own, then check out the React JS Certification or Web Development Certification Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe.

Got a question for us? Please mention it in the comments section and we will get back to you.

Original article source at: https://www.edureka.co/

#react #redux #tutorial 

Dipesh Malvia

1669288987

Build React CRUD Admin panel with Ant Design | Refine Tutorial | React Admin Crash Course

In this video we will build a React admin panel for a Content Management System app. We will learn how to consume a REST API and add CRUD functionality using Refine, a React-based framework, and we will also use Ant Design components with Refine to design our admin panel.

⭐️ Refine - React Framework ⭐️

Refine is a 100% open-source, headless React framework for CRUD apps, so you can quickly build internal tools, admin panels, and dashboards while remaining flexible.

GitHub: https://github.com/refinedev/refine

⭐️ Support my channel⭐️ https://www.buymeacoffee.com/dipeshmalvia

⭐️ Tutorial reference links⭐️ 

🔥 Video contents... ENJOY 👇 

  • 0:00:00 - Intro
  • 0:00:24 - Project Demo 
  • 0:02:44 - Refine Overview 
  • 0:04:04 - Project Setup with Superplate-cli 
  • 0:05:50 - Bootstrapping the Application 
  • 0:08:40 - Fake Rest API 
  • 0:09:19 - Adding Resources 
  • 0:11:01 - Create pages & Interfaces 
  • 0:14:46 - Creating a List Page 
  • 0:17:04 - Handling Relationships 
  • 0:20:40 - Adding Search & Filters 
  • 0:23:19 - Showing a Single Record 
  • 0:25:50 - Editing a Record 
  • 0:27:56 - Creating a Record 
  • 0:29:11 - Deleting a Record 
  • 0:31:10 - Outro 

⭐️ React Roadmap for Developers ⭐️ 

⭐️ JavaScript ⭐️ 

🔗 Social Medias 🔗 

⭐️ Tags ⭐️ - React CRUD Admin Panel - Build React Admin App From Scratch - React CRUD Admin Panel Tutorial - How to Build Admin Panel in React.js 

⭐️ Hashtags ⭐️ #react #admin #beginners #tutorial 

Disclaimer: It doesn't feel good to have a disclaimer in every video but this is how the world is right now. All videos are for educational purposes and use them wisely. Any video may have a slight mistake, please make decisions based on your research. This video is not forcing anything on you.

 

https://youtu.be/eDcxcTSQJaA

Power BI Tutorial for Beginners

Power BI Tutorial: A Step by Step Guide with Examples

The concept of Business Intelligence is something that is alien to very few people these days. With newer tools emerging every day to help solve the crisis of data management, most organizations have already adopted, or plan to adopt, Business Intelligence to solve it. Power BI is Microsoft’s latest BI tool, mainly aimed at helping everyone analyze and visualize their data. This Power BI tutorial for beginners will give you a complete insight into Power BI in the following sequence:

  1. What Is Business Intelligence And Why Do We Need It?
  2. What Is Data Visualization And Its Importance?
  3. Need For Power BI
  4. What Is Power BI?
  5. Components Of Power BI
  6. Architecture Of Power BI
  7. Features of PowerBI
  8. Data Sources in Power BI
  9. Companies using Power BI
  10. Steps for Installing Power BI
  11. Building Blocks Of Power BI
  12. Creating A Report Using Power BI
  13. Power BI Use Case: Wirepas
  14. Difference between Power BI and Tableau
  15. Difference between Power BI and SSRS
  16. Difference between Power BI and MSBI
  17. History of Power BI
  18. Pros and cons of Power BI
  19. Power BI Tools
  20. Who uses Power BI
  21. Key Terms used in Power BI

You may go through this Microsoft Power BI recording where our Power BI Certification Training expert has explained the topics in a detailed manner with examples that will help you to understand the concepts better.

Power BI Tutorial for Beginners

Let us begin this Power BI tutorial by addressing the most essential and fundamental question, what exactly is Business Intelligence?

What Is Business Intelligence (BI)?

In an age where Business Intelligence has become a bigger domain than most trending technologies, if you ask twenty people what the term business intelligence means, you are likely to get twenty different answers. So let me put it in the simplest terms without losing the technicality of it. Business Intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis. To put it simply, Business Intelligence is the technology that gets the right data to the right people at the right time so that they can make more effective business decisions.

The image below shows the benefits of Business Intelligence. 

Figure: Power BI Tutorial – Benefits of Business Intelligence

Over the years, the process of business intelligence has grown and adapted to help solve almost all the challenges while dealing with data by involving newer tools and techniques. The change that Business Intelligence has seen over the years can be divided into 3 waves, so let us continue with our Power BI tutorial and take a look at these three waves.

1st Wave: Technical (IT To End User)

During the first wave of business intelligence, the end-user had to be dependent on the IT department for data insights. This is because it was not possible for end-users to create visualizations/ reports on their own as tools available required technical knowledge. This dependence on the IT department for insights resulted in more effort and time consumption to get the updates done.

2nd Wave: Self-Service (Analyst To End User)

The second wave gave analysts access to BI. Now, people with some knowledge of analytics could use the BI tools. This meant more teams had access to BI and more people could get better data insights, which eased the load on IT teams.

3rd Wave: Everyone (End User)

The third wave has made it easier to access data and create reports and visuals to get better business insights. The introduction of tools like Power BI made this transition easy. Now anybody who has a basic understanding of the data can create reports and build intuitive, shareable dashboards.

This was about BI, now let us continue with our Power BI tutorial and understand another important topic that is associated with  BI.

What Is Data Visualization And Its Importance?

In a nutshell, data visualization is nothing but the pictorial or graphical representation of information/data. It provides insights into complex data sets by communicating the key aspects in more intuitive and meaningful ways. Data visualization lies at the intersection of design, communication, and information science.

Even though data visualization has been termed the key skill for research in the twenty-first century, it goes way back. It existed in the late 18th century and can be traced to when William Playfair invented geometrical charts. His bar charts were used to represent Scotland’s imports from and exports to 17 countries in 1781. These bar charts constituted a pure solution to the problem of discrete quantitative comparison.

Why Is Data Visualization Important?

Because of the way the human brain processes information, it is easier to use images, charts, or graphs to understand and visualize large amounts of complex data than to go through spreadsheets or reports. Take any image, for example; we all know the phrase ‘an image is worth a thousand words’. This is completely true because images aren’t just a mere collection of pixels, they also hold a lot of information. This information in visual form is easier to understand than reading the same facts in text form.

Data visualization is a quick and easy way to convey concepts or information in a universal manner. Data visualization can help to:

  • Identify key areas and hidden patterns.
  • Get factors that give better customer insights.
  • Analyze and associate data and products properly.
  • Make proper predictions.

This was about data visualization. Next, in this Power BI tutorial, we would see why is Power BI important.

Need For Power BI

The following points make Power BI one of the prominent tools for data visualization. This Power BI tutorial would be incomplete without understanding these points.

  • Spot trends in real-time: Traditional BI tools like Tableau or Qlikview restrict you to historical analysis. By using Power BI you can access real-time information so you can identify trends early. By doing so, you can identify issues and improve performance. 
  • Automatically search hidden insights: With Power BI, you can auto search data sets for hidden insights in seconds with Quick Insights. Users can simply ask questions and Power BI Q&A will answer their questions with immediate effect.
  • Custom visualizations: With Custom visuals, Power BI allows you to visualize data in almost every possible way you can imagine. Thus you are not limited to something that lies in the box.
  • Enterprise-ready: With Power BI and Power BI Desktop, you can securely connect to your own on-premises data sources. With the On-premises Data Gateway, you can connect live to your SQL Server and other data sources. It gives secure, scalable, and reliable enterprise-grade information technology.

The above-mentioned reasons make Power BI very important in the context of data visualization. Let us continue with this Power BI tutorial for Beginners and understand What is Power BI.

What Is Power BI?

Power BI, well, this name has been in the BI market for quite a long time. The Microsoft team worked for years to build the big umbrella called Power BI, a combination of strong visualization, data analysis, and cloud-based tooling.

To define it, Power BI is a business analytics service provided by Microsoft. It provides interactive visualizations with self-service business intelligence capabilities, where end users can create reports and dashboards by themselves, without having to depend on information technology staff or database administrators.

Power BI also gives you cloud-based BI services, known as “Power BI Services”, along with a desktop-based interface, called “Power BI Desktop”. It offers data warehouse capabilities, including data preparation, data discovery, and interactive dashboards. In March 2016, Microsoft released an additional service called Power BI Embedded on its Azure cloud platform which enables the user to analyze data easily, perform various ETL operations and deliver reports with Power BI.

Power BI gateways let you connect SQL Server databases, Analysis Services, and many other data sources to your dashboards in Power BI, and reporting portals let you embed Power BI reports and dashboards for a unified experience. The image below shows Power BI’s general workflow.

Now that we have understood what Power BI is, let us try and understand its important components in the next topic of this Power BI tutorial.

Components Of Power BI

Power BI has the following components: 

Figure: Power BI Tutorial – Components of Power BI

  • Power Query: It can be used to search, access, and transform public and/ or internal data sources.
  • Power Pivot: It is used in data modeling for in-memory analytics.
  • Power View: You can analyze, visualize and display data as an interactive data visualization using Power View.
  • Power Map: It brings data to life with interactive geographical visualization.
  • Power BI Service: You can share data views and workbooks which are refresh-able from on-premises and cloud based data sources.
  • Power BI Q&A: Ask questions and get immediate answers with natural language query.
  • Data Management Gateway: This component gives you periodic data refreshes and lets you expose tables and view data feeds.
  • Data Catalog: User can easily discover and reuse queries using Data Catalog. Metadata can be facilitated for search functionality.

Now that we have seen the above-mentioned components, let us continue with this Power BI tutorial and understand Power BI’s architecture.

Architecture Of Power BI

The following image shows Power BI’s architecture.

Figure: Power BI Tutorial – Architecture of Power BI

Power BI’s architecture has three phases. The first two phases partially use ETL (Extract, Transform and Load) to handle the data. Let us take a look at these phases one by one:

1. Data Integration

An organisation may be required to deal with data that comes from different sources, and in different file formats. The data is first extracted from the different sources, which can be different servers, databases, etc. This data is then integrated into a standard format and stored in a common area called the staging area.

2. Data Processing

The integrated data is still not ready for visualization because the data needs processing before it can be presented. This data is pre-processed or cleaned. For example, missing values or redundant values are removed from the data set. After the data is cleaned, business rules are applied to the data and it is transformed into presentable data. This data is then loaded into the Data Warehouse.

3. Data Presentation

Once the data is loaded and processed, it can be visualized much better with the various visualizations that Power BI has to offer. Reports and dashboards help represent the data in a more intuitive manner, and these visuals help business end users take decisions based on the insights.

Features of Power BI

  • Visualizations:

Power BI offers the functionality to visually represent our data or a subset of it so that it can be used to draw inferences or gain a deeper understanding of the data. These visuals can be bar graphs, pie charts, etc. Following are some examples of basic visual options provided in Power BI-

  1. Card – It is used to represent a single value such as Total Sales, etc.
  2. Stacked bar/column chart – it stacks the values of multiple series in a single bar or column; Power BI also offers combo charts that combine a line chart (which joins points representing values with a line) with a bar/column chart.
  3. Waterfall chart – It represents a continuously changing value where increase or decrease in value may be represented by differently colored bars.
  4. Pie chart– it represents the fractional value of each category of a particular field.
  5. Map-It is used to represent different information on a map.
  6. KPI-It represents the continuous progress made towards a target.
  7. Slicer – A slicer has options representing different categories of a field. Selecting that category shows only the information specific to that category in other visuals.
  8. Table – A table represents data in tabular form, i.e rows, and columns.

The following is an example of 4 basic visuals (slicer, table, pie chart and stacked column chart) created using Power BI.

Apart from these basic visuals, there are options of obtaining more visuals as well. By clicking on the ‘Get more visuals’ option we obtain the following options-

Custom visual files – Custom visuals can be coded and stored in files with the .pbiviz extension. This option enables users to import such visuals.

Organisational visuals – This option can be used to import visuals specific to the user’s organization.

Marketplace visuals – It is used to import visuals from Microsoft and its fellow community members.

  • Sourcing Varied Datasets

Datasets in Power BI can be sourced from a variety of sources. Some common examples of data sources are:

  • Excel
  • Power BI datasets
  • Power BI dataflows
  • SQL Server
  • MySQL database
  • Analysis Services
  • Azure
  • Text/CSV
  • Oracle
  • PDF
  • Access
  • XML
  • JSON

  • Datasets Filtration

While sourcing the data, instead of importing the entire dataset, the user can source a subset of it. This subset may be as per the user requirement. Data may be integrated with Excel, SQL database, Azure, Facebook, MailChimp, etc.

Data can be sourced from either a single source or from more than one source. The following is an example of a dataset sourced in Power BI-

Click on Transform data.

The user can choose the rows or columns as required by him and thus create the desired subset. This selection can be based on a condition such as selecting rows containing values for a particular field in a specific range.

The following image shows a filter applied to the Pclass field in the above dataset.


After applying the filter it shows only the rows belonging to Pclass 2 and 3.

  • Reports:

A report in Power BI is a collection of visualizations relevant to a particular topic, organized into pages. The user may add any number of pages to the report. Each page is a single screen containing visuals, and the pages can be arranged in the order the user requires.

The image below shows a sample report.

  • Dashboards:

All the visuals appearing on a single Power BI page form a dashboard. Since it is a single page, a dashboard generally contains only the most important or relevant visuals, which can be arranged in any order or position. Each dashboard can be shared with other users as well.

  • Flexible Tiles

In Power BI, a tile is a single visualization found in a report or on a dashboard. A tile can be thought of as a square or rectangular boundary containing a single visual.

The height and width of each tile are adjustable. The order or position of each tile on the dashboard is adjustable as well.

 

  • Navigation Pane

The Navigation pane is present on the top of the Power BI screen. It has the following tabs-

  • File
  • Home
  • Insert
  • Modelling
  • View
  • Help

There is a range of options in each tab to work with.

  • Q&A Box

Click on the Q&A button in the Insert tab to get a question box where users can type any question related to the data in natural language. Power BI will automatically try to complete the question using techniques like rephrasing, autofill and suggestions. The answer is returned in the form of a visual or text, and the user has the option of converting a text reply to a visual as well.

The below image shows a question asked in natural language (spelling corrected automatically) and its answer in number which can also be converted to a visual.

 

The following image shows the answer converted to a visual.

  • DAX Data Analysis Functions

To perform calculations on data, the user can use predefined DAX functions. There are currently around 200 predefined DAX functions available in Power BI. DAX, or Data Analysis Expressions, is a formula language used to interact with data on platforms like Power BI, PowerPivot and SSAS; a typical example is a measure such as Total Sales = SUM(Sales[Amount]) (the table and column names here are illustrative). It is simple and easy to learn and use.

  • Support & suggestion

In the Help tab, the user has a variety of options including support to resolve any query. The user can also give feedback or suggestions for improvement.

  • Integration with R

Power BI can be integrated with R scripts as well. This helps in data cleaning, data shaping and thus obtaining advanced analytics.

  • Security

Power BI provides robust security where access to each member is controlled. It provides quick responses to security threats. It also provides features like continuous monitoring, reporting, data protection, and unified endpoint management.

Data Sources in Power BI

A collection of data that can be imported into Power BI is known as a dataset. Through the Get Data feature, Power BI users can select from a range of data sources. The data sources range anywhere from on-premises to cloud-based, unstructured to structured, and new data sources are added every month. Data may be sourced from one or many different sources that can be combined together.

To source the data, click on the Get Data icon on the top of the screen. The data sources available for each category are as follows-

File category:

  • Excel
  • Text/CSV
  • XML
  • JSON
  • Folder
  • PDF
  • Parquet
  • SharePoint folder

Database category

  • SQL Server database
  • Access database
  • SQL Server Analysis Services database
  • Oracle database
  • IBM Db2 database
  • IBM Informix database (Beta)
  • IBM Netezza
  • MySQL database
  • PostgreSQL database
  • Sybase database
  • Teradata database
  • SAP HANA database
  • SAP Business Warehouse Application Server
  • SAP Business Warehouse Message Server
  • Amazon Redshift
  • Impala
  • Google BigQuery
  • Vertica
  • Snowflake
  • Essbase
  • Actian (Beta)
  • AtScale cubes
  • BI Connector
  • Data Virtuality LDW
  • Denodo
  • Dremio
  • Exasol
  • Indexima
  • InterSystems IRIS (Beta)
  • Jethro (Beta)
  • Kyligence
  • Linkar PICK Style / MultiValue Databases (Beta)
  • MariaDB (Beta)
  • MarkLogic
  • Amazon Athena (Beta)

Power Platform category 

  • Power BI datasets
  • Power BI dataflows
  • Common Data Service (Legacy)
  • Dataverse
  • Power Platform dataflows (Beta)

Azure category 

  • Azure SQL Database
  • Azure Synapse Analytics (SQL DW)
  • Azure Analysis Services database
  • Azure Database for PostgreSQL
  • Azure Blob Storage
  • Azure Table Storage
  • Azure Cosmos DB
  • Azure Data Explorer (Kusto)
  • Azure Data Lake Storage Gen2
  • Azure Data Lake Storage Gen1
  • Azure HDInsight (HDFS)
  • Azure HDInsight Spark
  • HDInsight Interactive Query
  • Azure Cost Management
  • Azure Databricks
  • Azure Time Series Insights (Beta)

 Online Services category 

  • SharePoint Online List
  • Microsoft Exchange Online
  • Dynamics 365 (online)
  • Dynamics NAV
  • Dynamics 365 Business Central
  • Dynamics 365 Business Central (on-premises)
  • Microsoft Azure Consumption Insights (Beta)
  • Azure DevOps (Boards only)
  • Azure DevOps Server (Boards only)
  • Salesforce Objects
  • Salesforce Reports
  • Google Analytics
  • Adobe Analytics
  • appFigures (Beta)
  • Data.World – Get Dataset (Beta)
  • GitHub (Beta)
  • LinkedIn Sales Navigator (Beta)
  • Marketo (Beta)
  • Mixpanel (Beta)
  • Planview Enterprise One – PRM (Beta)
  • QuickBooks Online (Beta)
  • Smartsheet
  • SparkPost (Beta)
  • SweetIQ (Beta)
  • Planview Enterprise One – CTM (Beta)
  • Twilio (Beta)
  • Zendesk (Beta)
  • Asana (Beta)
  • Assemble Views (Beta)
  • Automation Anywhere
  • Emigo Data Source
  • Entersoft Business Suite (Beta)
  • eWay-CRM (Beta)
  • FactSet Analytics
  • Palantir Foundry
  • Hexagon PPM Smart API
  • Industrial App Store
  • Intune Data Warehouse (Beta)
  • Projectplace for Power BI
  • Product Insights (beta)
  • Quick Base
  • SoftOne BI (beta)
  • Spigit (Beta)
  • TeamDesk (Beta)
  • Webtrends Analytics (Beta)
  • Witivio (Beta)
  • Workplace Analytics (Beta)
  • Zoho Creator (Beta)
  • Dynamics 365 Customer Insights (Beta)

Other categories

  • Web
  • SharePoint list
  • OData Feed
  • Active Directory
  • Microsoft Exchange
  • Hadoop File (HDFS)
  • Spark
  • Hive LLAP
  • R script
  • Python script
  • ODBC
  • OLE DB
  • Acterys : Model Automation & Planning (Beta)
  • Anaplan Connector v1.0 (Beta)
  • Solver
  • BQE Core (Beta)
  • Bloomberg Data and Analytics (Beta)
  • Cherwell (Beta)
  • Cognite Data Fusion
  • EQuIS (Beta)
  • FHIR
  • Information Grid (Beta)
  • Jamf Pro (Beta)
  • Kognitwin
  • MicroStrategy for Power BI
  • Paxata
  • QubolePresto (Beta)
  • Roamler (Beta)
  • Shortcuts Business Insights (Beta)
  • Siteimprove
  • Starburst Enterprise
  • SumTotal (Beta)
  • SurveyMonkey (Beta)
  • Microsoft Teams Personal Analytics (Beta)
  • Tenforce (Smart)List
  • TIBCO(R) Data Virtualization (Beta)
  • Vena (Beta)
  • Vessel Insight (Beta)
  • Zucchetti HR Infinity (Beta)
  • Blank Query

Companies using Power BI

The following are some of the companies currently using Power BI-

  1. Stryker
  2. Dematic
  3. Rockwell Automation
  4. GEICO
  5. Compass Group
  6. Helm

Steps for Installing Power BI 

  1. First go to the Microsoft Power BI desktop website: https://powerbi.microsoft.com/en-us/desktop/

2. Click on the Download Free button. The following page appears.

3. Choose the language and click the Download button. The following page appears.

4. Select the file to download and click Next. The Power BI setup is downloaded.

5. Open the Power BI setup and click Next.

6. Accept the terms and click Next.

7. Select the destination folder as required and click Next.

8. Click on Install.

The setup is installed.

Power BI Tutorial For Beginners | Power BI Training | Edureka

This video will help you to understand what is BI as well as Power BI. Then moving on in this video we have discussed the components and building blocks of Power BI.

Original article source at: https://www.edureka.co/

#PowerBI #tutorial 

Oral Brekke

1669211583

Design Your Web UI Using ReactJS JavaScript Library

ReactJS Tutorial – Design Your Web UI Using ReactJS JavaScript Library

Most of you would have heard about ‘ReactJS’ also known as React. For those of you curious to know more, I’ll be covering all the core concepts of React you need to know. By the end of this ReactJS tutorial, I’m confident that you will be clear with all the fundamentals of React. Let me start by giving you an overview of what I’ll be covering in this ReactJS tutorial.

  • Evolution Of React
  • Why Learn React?
  • React Features Overview
  • How does It work?
  • Building Blocks
  • React Installation

You may go through this recording of ReactJS Tutorial where our React training expert has explained the topics in a detailed manner with examples that will help you to understand the concept better.

Evolution Of React

React is a JavaScript library used to build user interfaces for web applications. React was initially developed and maintained by the folks at Facebook and was later used in their products (WhatsApp & Instagram). Now it is an open-source project with an active developer community. Popular websites like Netflix, Airbnb, Yahoo! Mail, Khan Academy, Dropbox and many more use React to build their UI. Modern websites are built using the MVC (Model-View-Controller) architecture. React is the ‘V’ in MVC, which stands for view, whereas the architecture is provided by Redux or Flux. React Native is used to develop mobile apps; the Facebook mobile app is built using React Native.

Facebook’s annual F8 Developer Conference 2017 saw two promising announcements: React Fiber and ReactVR. React Fiber is a complete rewrite of the previous release focusing on incremental rendering and quick responsiveness, and it is backward compatible with all previous versions. ReactVR is built on top of the React Native framework; it enables developing UIs that include 3D models to replicate a 360-degree environment, resulting in fully immersive VR content.

Why Learn React?

“Let’s just write less and do more!!”

React is among the easiest JS libraries you can start with. Conventional vanilla JavaScript is more time-consuming; why waste time writing lengthy code when you can get things done smoothly with React? React has over 71,200 stars on GitHub, making it the 4th most starred project of all time. After looking at the example below, I am sure you will understand why front-end developers across the world are switching to React. Now let’s try coding a set of nested lists in React and compare it with conventional JavaScript syntax. To learn more about React, check out this Web developer course today.

Example: 30 lines of code in Vanilla JavaScript can be replaced by just 10 lines of React code, isn’t that awesome!!

React

<ol>
  <li>List item 1</li>
  <li>List item 2 (child list)
    <ul>
      <li>Subitem 1</li>
      <li>Subitem 2</li>
    </ul>
  </li>
  <li>Final list item</li>
</ol>

Equivalent Vanilla JavaScript

React.createElement(
 "ol",
 null,
 React.createElement(
 "li",
 null,
 "List item 1 "
 ),
 React.createElement(
 "li",
 null,
 "List item 2 (child list)",
 React.createElement(
 "ul",
 null,
 React.createElement(
 "li",
 null,
 "Subitem 1"
 ),
 React.createElement(
 "li",
 null,
 "Subitem 2"
 )
 )
 ),
 React.createElement(
 "li",
 null,
 "Final list item"
 )
);

As you have already figured out, when the complexity increases, the generated JavaScript code becomes unmanageable. This is where JSX comes to the rescue, ensuring the code stays short and easily readable.

ReactJS Tutorial – Key Terminology

Figure: ReactJS Tutorial – Dependencies

Before we dive deeper into this ReactJS tutorial, let me first introduce you to some key terms you need to be familiar with.

JSX (JavaScript Extension)

JSX allows us to include HTML in the same file along with JavaScript (HTML + JS = JSX). Each component in React generates some HTML, which is rendered by the DOM.

ES6 (ES2015)

The sixth version of JavaScript, standardized by ECMA International in 2015. Hence the language is referred to as ECMAScript. ES6 is not completely supported by all modern browsers.

ES5 (ES2009)

This is the fifth JavaScript version and is widely accepted by all modern browsers. It is based on the 2009 ECMA specification standard. Tools are used to convert ES6 code to ES5 before it runs in the browser.

Webpack

A module bundler which generates a build file joining all the dependencies.

Babel

This is the tool used to convert ES6 to ES5. This is done because not all web browsers can render React (ES6+JSX) directly.
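As a rough sketch of what Babel does with JSX (the exact output depends on the preset and version):

// What you write:
const element = <h1 className="greeting">Hello</h1>;

// Roughly what Babel emits for the browser:
const element2 = React.createElement('h1', { className: 'greeting' }, 'Hello');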

React Features Overview

Figure: ReactJS Tutorial – React Features

Learning Curve

React has a shallow learning curve and it is suitable for beginners. ES6 syntax is easier to manage especially for smaller to-do apps. In React, you code in the ‘JavaScript’ way, giving you the freedom to choose your tool depending upon your need. Angular expects you to learn one additional tool ‘typescript’ which can be viewed as the ‘Angular’ way of doing things. In ‘Angular’ you need to learn the entire framework even if you’re just building a simple UI application.

Moving ahead in this ReactJS tutorial, I will be discussing React’s Virtual DOM.

Concept: The Simplicity Of Virtual DOM

Figure: ReactJS Tutorial – React Virtual DOM

In contrast to working directly on the actual DOM, React makes use of a Virtual DOM, which uses a diffing algorithm for its calculations. This relieves the real DOM, which can then process other tasks. Let me illustrate this with an example.

Now consider there are 10,000 nodes, of which we only need to work on 2. With the real DOM alone, most of the processing is wasted traversing those 10,000 nodes while we only operate on 2. The Virtual DOM does the calculations to find those 2 nodes, and the real DOM then quickly updates just them.

Performance

When it comes to performance, React sits right at the top. React is known for its superior rendering speed; thus the name “React”, an instant reaction to change with minimum delay. DOM manipulation is the heart of a responsive website, but unfortunately it is slow in most JavaScript frameworks. React implements a Virtual DOM, which is the underlying principle behind its superior performance.

Size

As we already know, React is not a framework, so features may be added according to the user’s needs. This is the principle behind the lightweight applications built on React: pick only what is needed. Webpack offers several plugins that further minimize (minify) the size during production. The React + Redux bundle, minified, is around 200 KB, whereas its rival Angular is almost four times bigger (the Angular + RxJS bundle).

Debugging

There will be a point when a developer hits a roadblock. It could be as simple as a missing bracket or as tricky as a segmentation fault. In any case, the earlier the exception is caught, the lower the cost overhead. React detects errors at compile time, which ensures that errors don’t silently turn up at run-time. Unidirectional data flow allows clean and smooth debugging: fewer stack traces, less clutter and an organized Flux architecture for bigger applications.

How Does It Work?

While React is easier to learn for beginners with no prior JavaScript experience, the nitty-gritty of transpiling JSX code can often be overwhelming. This sets the tone for tools such as Babel and Webpack. Webpack and Babel bundle all the JavaScript files together into a single file. Just like how we used to include a link to the CSS and JS files in our HTML code, Webpack performs a similar function, eliminating the need to explicitly link files.

I’m sure all of you use Facebook. Now, imagine Facebook being split into components: each functionality is assigned to a specific component, and each component produces some HTML which is rendered as output by the DOM.

Facebook Components

  • Search Bar
  • Add Post 
  • Notifications Bar 
  • Feed Updates
  • Profile Info
  • Chat Window

To make things clear, refer to the image below.

Figure: ReactJS Tutorial – Facebook Components

Building Blocks:

  • Components
  • Props
  • State
  • State Lifecycle
  • Event handling 
  • Keys

Moving on to the core aspect of our ReactJS tutorial, let us discuss the building blocks of React.

Components

The entire application can be modeled as a set of independent components, with different components serving different purposes. This enables us to keep logic and views separate. React renders multiple components simultaneously, and components can be either stateful or stateless.

Before we start creating components, we need to include a few ‘import’ statements.

In the first line, we instruct JavaScript to import the 'react' library from the installed npm module; this takes care of all the dependencies needed by React.

import React from 'react';

The HTML generated by the component needs to be displayed on the DOM. We achieve this by specifying a render function which tells React where exactly it needs to be rendered (displayed) on the screen. For this, we make a reference to an existing DOM node by passing a container element.

In React, the DOM is part of the 'react-dom' library. So in the next line, we instruct JavaScript to import the 'react-dom' library from the installed npm module.

import ReactDOM from 'react-dom';

In our example, we create a component named 'MyComponent' which displays a welcome message. We pass the component instance '<MyComponent/>' to React along with its container, the '<div>' element with id 'root'.

const MyComponent = () => {
  return <h2>Way to go you just created a component!!</h2>;
};

ReactDOM.render(<MyComponent/>, document.getElementById('root'));

Props

"All the user needs to do is change the parent component's state, while the changes are passed down to the child component through props."

Props is a shorthand for properties (You guessed it right!). React uses ‘props’ to pass attributes from ‘parent’ component to ‘child’ component.

Props are the arguments passed to the function or component, which are ultimately processed by React. Let me illustrate this with an example.

function Message(props) {
  return <h1>Good to have you back, {props.username}</h1>;
}

function App() {
  return (
    <div>
      <Message username="jim" />
      <Message username="duke" />
      <Message username="mike" />
    </div>
  );
}

ReactDOM.render(
  <App/>,
  document.getElementById('root')
);

Here the 'App' component renders three 'Message' component instances, passing each one a 'username' prop. All three usernames are passed as arguments to the Message component.

The output screen is as shown below:

Figure: ReactJS Tutorial – Props Output

State

“And I believe state adds the greatest value to React.”

State allows us to create components that are dynamic and interactive. State is private: it must not be manipulated from the outside. It is also important to know when to use state; it is generally used with data that is bound to change. For example, when we click a toggle button, it changes from an 'inactive' to an 'active' state. Use state only when needed: if a value is not used in render(), don't put it in the state, and don't use state with static components. The state can only be initialized inside the constructor. Let's include some code snippets to explain the same.

class Toggle extends React.Component {
  constructor(props) {
    super(props);
    this.state = {isToggleOn: true};
    this.handleClick = this.handleClick.bind(this);
  }

Binding is needed explicitly because, by default, a class method is not bound to the component instance.
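As a side note, here is a common alternative sketch (assuming the class-fields syntax that Create React App's Babel setup supports) which avoids the explicit bind, since arrow functions capture 'this' lexically:

class Toggle extends React.Component {
  state = { isToggleOn: true };

  // Arrow function: 'this' is bound to the instance automatically.
  handleClick = () => {
    this.setState(prevState => ({ isToggleOn: !prevState.isToggleOn }));
  };
}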

Event Handling And Manipulation Of State 

Whenever an event such as a button click or a mouse hover occurs, we need to handle these events and perform the appropriate actions. This is done using event handlers.

While state is initialized only once, inside the constructor, it can be updated through the setState() method. Whenever the handleClick() function is called, the isToggleOn flag is flipped based on the previous state.

handleClick()
{
this.setState(prevState =>({
isToggleOn: !prevState.isToggleOn
}));
}

The onClick attribute specifies the function to be executed when the target element is clicked. In our example, whenever a click is heard, we are telling React to transfer control to handleClick(), which switches between the two states.

render()
{
  return(
    <button onClick={this.handleClick}>
      {this.state.isToggleOn ? 'ON' : 'OFF'}
    </button>
  );
}
}// end class
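To try the toggle out, mounting it is a one-liner (assuming the usual 'root' container element from the CRA template):

ReactDOM.render(<Toggle />, document.getElementById('root'));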

State Lifecycle

We need to initialize resources for components according to their requirements; this is called "mounting" in React. It is equally critical to release the resources taken by a component when it is destroyed, so that performance can be managed and unused resources reclaimed; this is called "unmounting" in React. It is not essential to use the lifecycle methods, but use them if you wish to take control of the complete resource allocation and release process. The lifecycle methods componentDidMount() and componentWillUnmount() are used to allocate and release resources respectively.

class Time extends React.Component {
  constructor(props) {
    super(props);
    this.state = {date: new Date()};
  }

We store the ID returned by setInterval() in this.timerID and set an interval of 2 seconds; this is the time interval at which the displayed time refreshes.

componentDidMount() {
this.timerID = setInterval( () => this.tick(),2000);
}

componentWillUnmount() is where we clear that timer, so the resources are released when the component is destroyed. Cleaning up after a component this way is the optimal approach.

componentWillUnmount() { clearInterval(this.timerID); }

A timer is set to call the tick() method once every two seconds, and tick() passes an object with the current Date to setState(). Each time React calls the render() method, this.state.date is different, so React displays the updated time on the screen.

tick() {
  this.setState({date: new Date()});
}

render()
{
  return (
    <div>
      <h2>The Time is {this.state.date.toLocaleTimeString()}.</h2>
    </div>
  );
}
}// end class

ReactDOM.render( <Time />, document.getElementById('root') );

Keys

Keys in React provide identity to components: they are the means by which React identifies components uniquely. While working with individual components we don't need keys, as React takes care of key assignment according to rendering order. However, we need a strategy to differentiate between thousands of elements in a list, so we assign them keys. Keys let React keep track of which items have been added, changed, or removed without re-examining the entire list sequentially. Keys should be given to the elements inside an array to give the elements a stable identity.

In our example below, we keep an array 'data' of four items in state and assign each item its index 'i' as the key. We achieve this by defining the key as a prop and using JavaScript's map() function to render a 'Content' component for each element of the array.

class App extends React.Component {
  constructor() {
    super();
    this.state = {
      data: [
        { item: 'Java',   id: '1' },
        { item: 'React',  id: '2' },
        { item: 'Python', id: '3' },
        { item: 'C#',     id: '4' }
      ]
    };
  }

  render() {
    return (
      <div>
        {this.state.data.map((dynamicComponent, i) =>
          <Content key={i} componentData={dynamicComponent} />
        )}
      </div>
    );
  }
}

class Content extends React.Component {
  render() {
    return (
      <div>
        <div>{this.props.componentData.item}</div>
        <div>{this.props.componentData.id}</div>
      </div>
    );
  }
}

ReactDOM.render(
  <App/>,
  document.getElementById('root'));

React Installation

There are several ways to install React. In short, we can either configure the dependencies manually or use one of the open source starter packs available on GitHub. The 'create-react-app' (CRA) tool, maintained by Facebook itself, is one such example. It is suitable for beginners, who can focus on code without manually having to deal with transpiling tools like Webpack and Babel. In this ReactJS tutorial I will be showing you how to install React using the CRA tool.

npm: the Node Package Manager manages the different dependencies needed to run ReactJS applications. npm is bundled together with Node.js.

Step 1: Download NodeJS

First, go to the Node.js website, download the .exe file according to your system configuration, and install it.

Link: https://nodejs.org/en/download/

Step 2: Download the ‘create-react-app’ Tool from GitHub 

Link: https://github.com/facebookincubator/create-react-app

Step 3: Open the command prompt and navigate to your projects directory.

Now, enter the following commands. Note that create-react-app generates the 'my-app' folder, which we then enter:

->  npm install -g create-react-app
->  create-react-app my-app
->  cd my-app

Step 4: Start the development server

->  npm start

Once we type "npm start", the application starts execution on port 3000. Open http://localhost:3000/ and you will be greeted by this page.


Figure: ReactJS Tutorial – Welcome Page

This is how the file structure should look once you have successfully installed React.

my-app
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│   └── favicon.ico
│   └── index.html
│   └── manifest.json
└── src
    └── App.css
    └── App.js
    └── App.test.js
    └── index.css
    └── index.js
    └── logo.svg
    └── registerServiceWorker.js

When you are creating new apps, all you need to do is update the file 'App.js' and the changes will be reflected automatically; other files can be added or removed as needed. Make sure you put all your CSS and JS files inside the 'src' directory.
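For example, a minimal edit to 'App.js' like the sketch below (my own placeholder content, replacing the generated component body) shows up in the browser as soon as you save:

import React from 'react';

// Replace the generated markup with our own heading.
function App() {
  return <h1>Hello from my first edit!</h1>;
}

export default App;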

This brings us to the end of this ReactJS tutorial blog. Hope each and every aspect I discussed above is clear to you all. To learn more check out our courses on React.

If you found this blog on “ReactJS tutorial” relevant, check out the Web Development Course Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. This Edureka course helps learners gain expertise in both fundamental and advanced topics in React enabling you to develop full-fledged, dynamic web applications on the go.

Got a question for us? Please mention it in the comments section and we will get back to you.


Original article source at: https://www.edureka.co/

#reactjs #tutorial #javascript 

Nat  Grady

Nat Grady

1669186933

How to Do Continuous Integration Using Jenkins

Jenkins Tutorial

Jenkins is one of the most important tools in DevOps. I hope you have read my previous blog on What is Jenkins. In this Jenkins Tutorial blog, I will focus on Jenkins architecture and the Jenkins build pipeline, and I will show you how to create a build in Jenkins.

Before we proceed with Jenkins Tutorial, the key takeaways from the previous blog are:

  • Jenkins is used to integrate all DevOps stages with the help of plugins.
  • Commonly used Jenkins plugins are Git, Amazon EC2, Maven 2 project, HTML publisher etc.
  • Jenkins has well over 1000 plugins and 147,000 active installations along with over 1 million users around the world.
  • With Continuous Integration, every change made to the source code is built. It can perform other functions as well, depending on the tool used for Continuous Integration.
  • Nokia shifted from Nightly build to Continuous Integration.
  • The process before Continuous Integration had many flaws. As a result, not only was software delivery slow, but the quality of the software was also not up to the mark. Developers also had a tough time locating and fixing bugs.
  • Continuous Integration with Jenkins overcame these shortcomings by continuously triggering a build and test for every change made in the source code.

Now is the correct time to understand Jenkins architecture.

Jenkins Architecture

Let us revise the standalone Jenkins architecture that I explained in the previous blog; the diagram below depicts the same.

Figure: Jenkins Standalone Architecture

This single Jenkins server was not enough to meet certain requirements like:

  • Sometimes you might need several different environments to test your builds. This cannot be done by a single Jenkins server.
  • If larger and heavier projects get built on a regular basis then a single Jenkins server cannot simply handle the entire load.

To address the above stated needs, Jenkins distributed architecture was introduced.

 

Jenkins Distributed Architecture

Jenkins uses a Master-Slave architecture to manage distributed builds. In this architecture, Master and Slave communicate through TCP/IP protocol.

Jenkins Master

Your main Jenkins server is the Master. The Master’s job is to handle:

  • Scheduling build jobs.
  • Dispatching builds to the slaves for the actual execution.
  • Monitoring the slaves (possibly taking them online and offline as required).
  • Recording and presenting the build results.
  • A Master instance of Jenkins can also execute build jobs directly.

Jenkins Slave

A Slave is a Java executable that runs on a remote machine. Following are the characteristics of Jenkins Slaves:

  • It listens for requests from the Jenkins Master instance.
  • Slaves can run on a variety of operating systems.
  • The job of a Slave is to do as it is told, which involves executing build jobs dispatched by the Master.
  • You can configure a project to always run on a particular Slave machine, or a particular type of Slave machine, or simply let Jenkins pick the next available Slave.

The diagram below is self-explanatory. It consists of a Jenkins Master managing three Jenkins Slaves.

Figure: Jenkins Distributed Architecture

Now let us look at an example in which Jenkins is used for testing in different environments like Ubuntu, macOS, and Windows.

The diagram below represents the same:

Figure: Distributed Testing

The following functions are performed in the above image:

  • Jenkins checks the Git repository at periodic intervals for any changes made in the source code.
  • Each build requires a different testing environment, which is not possible on a single Jenkins server. In order to perform testing in different environments, Jenkins uses various Slaves as shown in the diagram.
  • Jenkins Master requests these Slaves to perform testing and to generate test reports.

 

Jenkins Build Pipeline

It is used to know which task Jenkins is currently executing. Often several different changes are made by several developers at once, so it is useful to know which change is being tested, which change is sitting in the queue, or which build is broken. This is where the pipeline comes into the picture. The Jenkins Build Pipeline gives you an overview of where tests are up to. In a build pipeline, the build as a whole is broken down into sections, such as the unit test, acceptance test, packaging, reporting, and deployment phases. The pipeline phases can be executed in series or parallel, and if one phase is successful, the build automatically moves on to the next phase (hence the relevance of the name "pipeline"). The image below shows what a multiple-build pipeline looks like.

Figure: Jenkins Build Pipeline
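As a sketch of how such phases can be written down in series, here is a minimal declarative Jenkinsfile (the stage names and shell commands are illustrative assumptions, not taken from this walkthrough):

pipeline {
    agent any
    stages {
        // Each stage is one phase of the pipeline; a failure stops the flow.
        stage('Unit Test')       { steps { sh 'echo running unit tests' } }
        stage('Acceptance Test') { steps { sh 'echo running acceptance tests' } }
        stage('Package')         { steps { sh 'echo packaging artifacts' } }
        stage('Deploy')          { steps { sh 'echo deploying' } }
    }
}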

Hope you have understood the theoretical concepts. Now, let’s have some fun with hands-on.

I will create a new job in Jenkins as a Freestyle Project. However, there are 3 more options available, so let us look at the types of build jobs available in Jenkins.

Freestyle Project:

Freestyle build jobs are general-purpose build jobs which provide maximum flexibility. The freestyle build job is the most flexible and configurable option and can be used for any type of project. It is relatively straightforward to set up, and many of the options we configure here also appear in other build jobs.

Multiconfiguration Job:

The "multiconfiguration project" (also referred to as a "matrix project") allows you to run the same build job in different environments. It is used for testing an application in different environments, with different databases, or even on different build machines.

Monitor an External Job:

The “Monitor an external job” build job lets you keep an eye on non-interactive processes, such as cron jobs. 

Maven Project:

The “maven2/3 project” is a build job specially adapted to Maven projects. Jenkins understands Maven pom files and project structures, and can use the information gleaned from the pom file to reduce the work you need to do to set up your project.

For a better understanding of Jenkins, check out this Jenkins tutorial video.

Getting Started With Jenkins | Jenkins and DevOps tutorial | Jenkins for Beginners | Edureka

Creating a Build Using Jenkins

Step 1: From the Jenkins interface home, select New Item.

Figure: Jenkins Dashboard

Step 2: Enter a name and select Freestyle project.

Figure: Jenkins Freestyle Project

Step 3: This next page is where you specify the job configuration. As you’ll quickly observe, there are a number of settings available when you create a new project. On this configuration page, you also have the option to Add build step to perform extra actions like running scripts. I will execute a shell script.

Figure: Jenkins Execute Shell Script

This will provide you with a text box in which you can add whatever commands you need. You can use scripts to run various tasks like server maintenance, version control, reading system settings, etc. I will use this section to run a simple script.

Figure: Shell Script
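For illustration, this is the kind of simple script you might drop into the Execute shell box (the echo is just a placeholder; BUILD_NUMBER and NODE_NAME are environment variables Jenkins injects into every build):

#!/bin/bash
# Print some context about the current build.
echo "Hello from Jenkins build #${BUILD_NUMBER} on node ${NODE_NAME}"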

Step 4: Save the project, and you'll be taken to a project overview page. Here you can see information about the project, including its build history.

Figure: Project Overview

Step 5: Click Build Now on the left-hand side to start the build.

Figure: Build Now

Step 6: To see more information, click on that build in the build history area, whereupon you’ll be taken to a page with an overview of the build information.

Figure: Build History

Step 7: The Console Output link on this page is especially useful for examining the results of the job in detail.

Figure: Console Output

Step 8: If you go back to Jenkins home, you’ll see an overview of all projects and their information, including status.

Figure: Build Status

Status of the build is indicated in two ways, by a weather icon and by a colored ball. The weather icon is particularly helpful as it shows you a record of multiple builds in one image.

As you can see in the above image, the sun icon indicates that all of my builds were successful. The color of the ball gives the status of a particular build; in the above image the ball is blue, which means that this particular build was successful.

In this Jenkins Tutorial, I have just given an introductory example. In my next blog, I will show you how to pull and build code from the GitHub repository using Jenkins.

If you found this Jenkins Tutorial relevant, check out the DevOps training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. The Edureka DevOps Certification Training course helps learners gain expertise in various DevOps processes and tools such as Puppet, Jenkins, Nagios and GIT for automating multiple steps in SDLC.

Got a question for us? Please mention it in the comments section and we will get back to you.

Original article source at: https://www.edureka.co/

#jenkins #tutorial 

Desmond  Gerber

Desmond Gerber

1669013905

How Reflection Works in Kotlin

Learn what reflection is and how it works in Kotlin with this tutorial.

In programming, reflection is a programming language’s ability to inspect and interact with statically defined classes, functions, and properties during runtime.

The feature is particularly useful when you receive an object instance of an unknown class.

By using reflection, you can check if a particular object has a certain method, and call that method when it exists.

To use reflection in Kotlin, you need to include the kotlin-reflect library in your project:

dependencies {
    implementation("org.jetbrains.kotlin:kotlin-reflect:1.6.10")
}

The library contains the runtime component required for using Kotlin reflection features.

Next, let’s see how you can get class, function, and property references using Kotlin reflection feature.

Kotlin reflection - class reference

Suppose you have a Dog class with the following definitions:

class Dog(var name: String) {
    fun bark() {
        println("Bark!")
    }

    fun bark(sound: String) {
        println(sound)
    }

    private fun hello() {
        println("Hello! My name is $name")
    }
}

To get the class reference in Kotlin, you can use the class literal syntax ::class as shown below:

val classRef = Dog::class

Alternatively, you can get the class reference from an object instance by using the same ::class syntax on the instance:

val myDog = Dog("Puppy")

val classRef = myDog::class

Getting the class reference from an object instance is also known as a bound class reference.

Once you have the class reference, you can access the properties of the reference to find out more about that class.

For example, you can find the name of the class and check if that class is a data class:

println(classRef.simpleName) // Dog
println(classRef.qualifiedName) // org.metapx.Dog
println(classRef.isData) // false

In Kotlin, the class reference is identified as the KClass type, which stands for Kotlin class.

You can check the KClass documentation for all the properties and methods you can use to find out about the class from its reference.

Aside from inspecting the class, KClass also has some interesting abilities. The createInstance() extension method (imported from kotlin.reflect.full.createInstance), for example, allows you to create a new object from the class reference:

val secondDog = classRef.createInstance()

But keep in mind that createInstance() only works when the class has a constructor with optional or no parameters.

An error is thrown when no constructor fulfills that criterion. Note that the Dog class above takes a required name, so this call would actually throw unless we give name a default value.
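Here is a small self-contained sketch (the Cat class is my own illustration, not from the Dog example above) where createInstance() succeeds because every constructor parameter has a default:

import kotlin.reflect.full.createInstance

// All constructor parameters are optional, so reflection can instantiate it.
class Cat(var name: String = "Whiskers")

fun main() {
    val cat = Cat::class.createInstance()
    println(cat.name) // Whiskers
}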

Accessing Kotlin class reference methods

You can also get access to the methods of the class reference regardless of their access modifier.

This means even private functions of a class can be accessed from its reference.

The memberFunctions extension property of KClass (from kotlin.reflect.full) stores all methods of the class as a Collection:

val myDog = Dog("Puppy")

val classRef = myDog::class

classRef.memberFunctions.forEach { 
    println(it.name) 
}

The output will be as follows:

bark
bark
hello
equals
hashCode
toString

Next, you can call the class function from its reference as follows:

val myDog = Dog("Puppy")

val classRef = myDog::class

val barkRef = classRef.memberFunctions.find { 
    it.name == "bark" 
}

barkRef?.call(myDog)

First, you need to use the find() function to retrieve the function reference. (With two bark overloads, find() simply returns the first whose name matches; to pick a specific overload you would also inspect its parameters.)

Then, check if the function reference is found using the null-safe call.

When the reference is found, use the call() method of the function reference's KFunction type.

The first argument of the call() method must be an instance of the class reference, which is why myDog object is passed into the method.

When your function is private, you need to set the isAccessible property of the function reference (an extension from kotlin.reflect.jvm) to true before calling the function:

val helloRef = classRef.memberFunctions.find { 
    it.name == "hello" 
}

helloRef?.isAccessible = true

helloRef?.call(myDog)

And that’s how you access the methods of a class using its reference.

Accessing Kotlin class reference properties

The properties of a Kotlin class reference can be accessed the same way you access its methods.

The properties of a class are stored in memberProperties as a Collection.

For example, you can get the name property value of the myDog instance as follows:

val myDog = Dog("Puppy")

val classRef = myDog::class

val nameRef = classRef.memberProperties.find {
    it.name == "name" 
}

println(nameRef?.getter?.call(myDog)) // Puppy

A property reference is an instance of the KProperty type. The value of the property is retrieved by invoking its getter.

To change the value of the name property, you first need to cast the property reference to KMutableProperty, as shown below:

val myDog = Dog("Puppy")

val classRef = myDog::class

val nameRef = classRef.memberProperties.find {
    it.name == "name" 
} as KMutableProperty<*>?

nameRef?.setter?.call(myDog, "Jacob")

println(myDog.name) // Jacob

KMutableProperty exposes the setter, which you call to set the value of the property.

Now you’ve learned how to access methods and properties from a class reference.

Next, let's look at how you can get a function reference with Kotlin reflection.

Kotlin reflection - function reference

You can get a reference to a named Kotlin function by using the :: operator.

Here’s an example:

fun hello() {
    println("Hello World!")
}

val funRef = ::hello

funRef() // Hello World!

The funRef above is an instance of the KFunction type, which represents a function with introspection capabilities.
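Beyond invoking it, the reference can also be inspected. A quick self-contained sketch (the printed values follow from the hello() declaration above):

import kotlin.reflect.KFunction

fun hello() {
    println("Hello World!")
}

fun main() {
    val funRef: KFunction<Unit> = ::hello
    println(funRef.name)            // hello
    println(funRef.parameters.size) // 0
    println(funRef.returnType)      // kotlin.Unit
    funRef.call()                   // Hello World!
}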

Conclusion

Now you've learned what the Kotlin reflection feature is and how it works, with some examples. Reflection is a powerful feature that should be reserved for specific requirements.

Because of its ability to inspect source code at runtime, it is frequently used when developing a framework or library that other code builds on.

JUnit and Spring frameworks are notable for using reflection in their source code.

The library author won’t know the classes and functions created by the user. Reflection allows the framework to deal with classes and functions without knowing about them in advance.

Original article source at: https://sebhastian.com/

#kotlin #tutorial #reflection  
