Have Fun with Machine Learning: A Guide for Beginners
This is a hands-on guide to machine learning for programmers with no background in AI. Using a neural network doesn’t require a PhD, and you don’t need to be the person who makes the next breakthrough in AI in order to use what exists today. What we have now is already breathtaking, and highly usable. I believe that more of us need to play with this stuff like we would any other open source technology, instead of treating it like a research topic.
In this guide our goal will be to write a program that uses machine learning to predict, with a high degree of certainty, whether the images in data/untrained-samples are of dolphins or seahorses using only the images themselves, and without having seen them before. Here are two example images we'll use:
To do that we’re going to train and use a Convolutional Neural Network (CNN). We’re going to approach this from the point of view of a practitioner vs. from first principles. There is so much excitement about AI right now, but much of what’s being written feels like being taught to do tricks on your bike by a physics professor at a chalkboard instead of your friends in the park.
I’ve decided to write this on Github vs. as a blog post because I’m sure that some of what I’ve written below is misleading, naive, or just plain wrong. I’m still learning myself, and I’ve found the lack of solid beginner documentation an obstacle. If you see me making a mistake or missing important details, please send a pull request.
With all of that out of the way, let me show you how to do some tricks on your bike!
Here’s what we’re going to explore: setting up the tools (Caffe and DIGITS), preparing a dataset, training networks from scratch and by fine tuning pretrained ones, and finally using a trained model from code.
This guide won’t teach you how neural networks are designed, cover much theory, or use a single mathematical expression. I don’t pretend to understand most of what I’m going to show you. Instead, we’re going to use existing things in interesting ways to solve a hard problem.
Q: "I know you said we won’t talk about the theory of neural networks, but I’m feeling like I’d at least like an overview before we get going. Where should I start?"
There are literally hundreds of introductions to this, from short posts to full online courses; pick whichever starting point matches how you like to learn.
Installing the software we'll use (Caffe and DIGITS) can be frustrating, depending on your platform and OS version. By far the easiest way to do it is using Docker. Below we examine how to do it with Docker, as well as how to do it natively.
First, we’re going to be using the Caffe deep learning framework from the Berkeley Vision and Learning Center (BSD licensed).
Q: “Wait a minute, why Caffe? Why not use something like TensorFlow, which everyone is talking about these days…”
There are a lot of great choices available, and you should look at all the options. TensorFlow is great, and you should play with it. However, I’m using Caffe for a number of reasons.
But the number one reason I’m using Caffe is that you don’t need to write any code to work with it. You can do everything declaratively (Caffe uses structured text files to define the network architecture) and using command-line tools. Also, you can use some nice front-ends for Caffe to make training and validating your network a lot easier. We’ll be using nVidia’s DIGITS tool below for just this purpose.
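To give a concrete (if simplified) sense of that command-line workflow, training a network with the caffe binary is roughly a one-liner. The path below is a placeholder, and we won't actually need this ourselves, since DIGITS will drive Caffe for us:

# Hypothetical paths: point the caffe binary at a solver definition and it trains the network
./build/tools/caffe train --solver=path/to/solver.prototxt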
Caffe can be a bit of work to get installed. There are installation instructions for various platforms, including some prebuilt Docker or AWS configurations.
NOTE: when making my walkthrough, I used the following non-released version of Caffe from their Github repo: https://github.com/BVLC/caffe/commit/5a201dd960840c319cefd9fa9e2a40d2c76ddd73
On a Mac it can be frustrating to get working, with version issues halting your progress at various steps in the build. It took me a couple of days of trial and error. There are a dozen guides I followed, each with slightly different problems. In the end I found this one to be the closest. I’d also recommend this post, which is quite recent and links to many of the same discussions I saw.
Getting Caffe installed is by far the hardest thing we'll do, which is pretty neat, since you’d assume the AI aspects would be harder! Don’t give up if you have issues, it’s worth the pain. If I was doing this again, I’d probably use an Ubuntu VM instead of trying to do it on Mac directly. There's also a Caffe Users group, if you need answers.
Q: “Do I need powerful hardware to train a neural network? What if I don’t have access to fancy GPUs?”
It’s true, deep neural networks require a lot of computing power and energy to train...if you’re training them from scratch and using massive datasets. We aren’t going to do that. The secret is to use a pretrained network that someone else has already invested hundreds of hours of compute time training, and then to fine tune it to your particular dataset. We’ll look at how to do this below, but suffice it to say that everything I’m going to show you, I’m doing on a year old MacBook Pro without a fancy GPU.
As an aside, because I have an integrated Intel graphics card vs. an nVidia GPU, I decided to use the OpenCL Caffe branch, and it’s worked great on my laptop.
When you’re done installing Caffe, you should have, or be able to do, all of the following:

- A build/ dir which contains everything you need to run Caffe, the Python bindings, etc. The parent dir that contains build/ will be your CAFFE_ROOT (we’ll need this later).
- make test && make runtest should pass.
- After installing the Python dependencies (pip install -r requirements.txt in python/), running make pycaffe && make pytest should pass.
- You should also run make distribute in order to create a distributable version of Caffe with all necessary headers, binaries, etc. in distribute/.

On my machine, with Caffe fully built, I’ve got the following basic layout in my CAFFE_ROOT dir:
caffe/
build/
python/
lib/
tools/
caffe ← this is our main binary
distribute/
python/
lib/
include/
bin/
proto/
At this point, we have everything we need to train, test, and program with neural networks. In the next section we’ll add a user-friendly, web-based front end to Caffe called DIGITS, which will make training and testing our networks much easier.
nVidia’s Deep Learning GPU Training System, or DIGITS, is a BSD-licensed Python web app for training neural networks. While it’s possible to do everything DIGITS does in Caffe at the command-line, or with code, using DIGITS makes it a lot easier to get started. I also found it more fun, due to the great visualizations, real-time charts, and other graphical features. Since you’re experimenting and trying to learn, I highly recommend beginning with DIGITS.
There are quite a few good docs at https://github.com/NVIDIA/DIGITS/tree/master/docs, including a few Installation, Configuration, and Getting Started pages. I’d recommend reading through everything there before you continue, as I’m not an expert on everything you can do with DIGITS. There's also a public DIGITS User Group if you have questions you need to ask.
There are various ways to install and run DIGITS, from Docker to pre-baked packages on Linux, or you can build it from source. I’m on a Mac, so I built it from source.
NOTE: In my walkthrough I've used the following non-released version of DIGITS from their Github repo: https://github.com/NVIDIA/DIGITS/commit/81be5131821ade454eb47352477015d7c09753d9
Because it’s just a bunch of Python scripts, it was fairly painless to get working. The one thing you need to do is tell DIGITS where your CAFFE_ROOT
is by setting an environment variable before starting the server:
export CAFFE_ROOT=/path/to/caffe
./digits-devserver
NOTE: on Mac I had issues with the server scripts assuming my Python binary was called python2, whereas I only have python2.7. You can symlink it in /usr/bin or modify the DIGITS startup script(s) to use the proper binary on your system.
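For example, one way to create that symlink is shown below; this is just a sketch, and the exact paths may differ on your system:

# Hypothetical paths: make a "python2" name that points at the python2.7 binary
sudo ln -s /usr/bin/python2.7 /usr/bin/python2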
Once the server is started, you can do everything else via your web browser at http://localhost:5000, which is what I'll do below.
Install Docker, if not already installed, then run the following command in order to pull and run a full Caffe + DIGITS container. A few things to note:

- Change /path/to/this/repository to the location of this cloned repo; /data/repo within the container will be bound to this directory. This is useful for accessing the images discussed below.

docker run --name digits -d -p 8080:5000 -v /path/to/this/repository:/data/repo kaixhin/digits
Now that we have our container running you can open up your web browser and open http://localhost:8080. Everything in the repository is now in the container directory /data/repo. That's it. You've now got Caffe and DIGITS working.
If you need shell access, use the following command:
docker exec -it digits /bin/bash
Training a neural network involves a few steps:
We’re going to do this 3 different ways, in order to show the difference between starting from scratch and using a pretrained network, and also to show how to work with two popular pretrained networks (AlexNet, GoogLeNet) that are commonly used with Caffe and DIGITS.
For our training attempts, we’ll use a small dataset of Dolphins and Seahorses. I’ve put the images I used in data/dolphins-and-seahorses. You need at least 2 categories, but could have many more (some of the networks we’ll use were trained on 1000+ image categories). Our goal is to be able to give an image to our network and have it tell us whether it’s a Dolphin or a Seahorse.
The easiest way to begin is to divide your images into a categorized directory layout:
dolphins-and-seahorses/
dolphin/
image_0001.jpg
image_0002.jpg
image_0003.jpg
...
seahorse/
image_0001.jpg
image_0002.jpg
image_0003.jpg
...
Here each directory is a category we want to classify, and each image within that category dir an example we’ll use for training and validation.
Q: “Do my images have to be the same size? What about the filenames, do they matter?”
No to both. The image sizes will be normalized before we feed them into the network. We’ll eventually want colour images of 256 x 256 pixels, but DIGITS will crop or squash (we'll squash) our images automatically in a moment. The filenames are irrelevant--it’s only important which category they are contained within.
Q: “Can I do more complex segmentation of my categories?”
Yes. See https://github.com/NVIDIA/DIGITS/blob/digits-4.0/docs/ImageFolderFormat.md.
We want to use these images on disk to create a New Dataset, and specifically, a Classification Dataset.
We’ll use the defaults DIGITS gives us, and point Training Images at the path to our data/dolphins-and-seahorses folder. DIGITS will use the categories (dolphin and seahorse) to create a database of squashed, 256 x 256 Training (75%) and Testing (25%) images.

Give your Dataset a name, dolphins-and-seahorses, and click Create.
This will create our dataset, which took only 4s on my laptop. In the end I have 92 Training images (49 dolphin, 43 seahorse) in 2 categories, with 30 Validation images (16 dolphin, 14 seahorse). It’s a really small dataset, but perfect for our experimentation and learning purposes, because it won’t take forever to train and validate a network that uses it.
You can Explore the db if you want to see the images after they have been squashed.
Back in the DIGITS Home screen, we need to create a new Classification Model:
We’ll start by training a model that uses our dolphins-and-seahorses
dataset, and the default settings DIGITS provides. For our first network, we’ll choose to use one of the standard network architectures, AlexNet (pdf). AlexNet’s design won a major computer vision competition called ImageNet in 2012. The competition required categorizing 1000+ image categories across 1.2 million images.
Caffe uses structured text files to define network architectures. These text files are based on Google’s Protocol Buffers. You can read the full schema Caffe uses. For the most part we’re not going to work with these, but it’s good to be aware of their existence, since we’ll have to modify them in later steps. The AlexNet prototxt file looks like this, for example: https://github.com/BVLC/caffe/blob/master/models/bvlc_alexnet/train_val.prototxt.
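To give you a feel for the format, here's a small fragment in the same style as the first convolution layer defined in that file (simplified; see the linked prototxt for the real definitions):

layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
  }
}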
We’ll train our network for 30 epochs, which means that it will learn (with our training images) then test itself (using our validation images), and adjust the network’s weights depending on how well it’s doing, and repeat this process 30 times. Each time it completes a cycle we’ll get info about Accuracy (0% to 100%, where higher is better) and what our Loss is (the sum of all the mistakes that were made, where lower is better). Ideally we want a network that is able to predict with high accuracy, and with few errors (small loss).
NOTE: some people have reported hitting errors in DIGITS doing this training run. For many, the problem related to available memory (the process needs a lot of memory to work). If you're using Docker, you might want to try increasing the amount of memory available to DIGITS (in Docker, preferences -> advanced -> memory).
Initially, our network’s accuracy is a bit below 50%. This makes sense, because at first it’s just “guessing” between two categories using randomly assigned weights. Over time it’s able to achieve 87.5% accuracy, with a loss of 0.37. The entire 30 epoch run took me just under 6 minutes.
We can test our model using an image we upload or a URL to an image on the web. Let’s test it on a few examples that weren’t in our training/validation dataset:
It almost seems perfect, until we try another:
Here it falls down completely, mistaking a seahorse for a dolphin and, worse, doing so with a high degree of confidence.
The reality is that our dataset is too small to be useful for training a really good neural network. We really need 10s or 100s of thousands of images, and with that, a lot of computing power to process everything.
Designing a neural network from scratch, collecting data sufficient to train it (e.g., millions of images), and accessing GPUs for weeks to complete the training is beyond the reach of most of us. To make it practical for smaller amounts of data to be used, we employ a technique called Transfer Learning, or Fine Tuning. Fine tuning takes advantage of the layout of deep neural networks, and uses pretrained networks to do the hard work of initial object detection.
Imagine using a neural network to be like looking at something far away with a pair of binoculars. You first put the binoculars to your eyes, and everything is blurry. As you adjust the focus, you start to see colours, lines, shapes, and eventually you are able to pick out the shape of a bird, then with some more adjustment you can identify the species of bird.
In a multi-layered network, the initial layers extract features (e.g., edges), with later layers using these features to detect shapes (e.g., a wheel, an eye), which are then fed into final classification layers that detect items based on accumulated characteristics from previous layers (e.g., a cat vs. a dog). A network has to be able to go from pixels to circles to eyes to two eyes placed in a particular orientation, and so on up to being able to finally conclude that an image depicts a cat.
What we’d like to do is to specialize an existing, pretrained network for classifying a new set of image classes instead of the ones on which it was initially trained. Because the network already knows how to “see” features in images, we’d like to retrain it to “see” our particular image types. We don’t need to start from scratch with the majority of the layers--we want to transfer the learning already done in these layers to our new classification task. Unlike our previous attempt, which used random weights, we’ll use the existing weights of the final network in our training. However, we’ll throw away the final classification layer(s) and retrain the network with our image dataset, fine tuning it to our image classes.
For this to work, we need a pretrained network that is similar enough to our own data that the learned weights will be useful. Luckily, the networks we’ll use below were trained on millions of natural images from ImageNet, which is useful across a broad range of classification tasks.
This technique has been used to do interesting things like screening for eye diseases in medical imagery, identifying plankton species in microscopic images collected at sea, and categorizing the artistic style of Flickr images.
Doing this perfectly, like all of machine learning, requires you to understand the data and network architecture--you have to be careful not to overfit the data, you might need to fix (freeze) some of the layers or insert new ones, etc. However, my experience is that it “Just Works” much of the time, and it’s worth simply running an experiment to see what you can achieve with our naive approach.
In our first attempt, we used AlexNet’s architecture, but started with random weights in the network’s layers. What we’d like to do is download and use a version of AlexNet that has already been trained on a massive dataset.
Thankfully we can do exactly this. A snapshot of AlexNet is available for download: https://github.com/BVLC/caffe/tree/master/models/bvlc_alexnet. We need the binary .caffemodel
file, which is what contains the trained weights, and it’s available for download at http://dl.caffe.berkeleyvision.org/bvlc_alexnet.caffemodel.
While you’re downloading pretrained models, let’s get one more at the same time. In 2014, Google won the same ImageNet competition with GoogLeNet (codenamed Inception): a 22-layer neural network. A snapshot of GoogLeNet is available for download as well, see https://github.com/BVLC/caffe/tree/master/models/bvlc_googlenet. Again, we’ll need the .caffemodel
file with all the pretrained weights, which is available for download at http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel.
With these .caffemodel files in hand, we can upload them into DIGITS. Go to the Pretrained Models tab on the DIGITS home page and choose Upload Pretrained Model:

For both of these pretrained models, we can use the defaults DIGITS provides (i.e., colour, squashed images of 256 x 256). We just need to provide the Weights (the .caffemodel) and Model Definition (original.prototxt). Click each of those buttons to select a file.
For the model definitions we can use https://github.com/BVLC/caffe/blob/master/models/bvlc_googlenet/train_val.prototxt for GoogLeNet and https://github.com/BVLC/caffe/blob/master/models/bvlc_alexnet/train_val.prototxt for AlexNet. We aren’t going to use the classification labels of these networks, so we’ll skip adding a labels.txt
file:
Repeat this process for both AlexNet and GoogLeNet, as we’ll use them both in the coming steps.
Q: "Are there other networks that would be good as a basis for fine tuning?"
The Caffe Model Zoo has quite a few other pretrained networks that could be used, see https://github.com/BVLC/caffe/wiki/Model-Zoo.
Training a network using a pretrained Caffe Model is similar to starting from scratch, though we have to make a few adjustments. First, we’ll adjust the Base Learning Rate to 0.001 from 0.01, since we don’t need to make such large jumps (i.e., we’re fine tuning). We’ll also use a Pretrained Network, and Customize it.
In the pretrained model’s definition (i.e., prototext), we need to rename all references to the final Fully Connected Layer (where the end result classifications happen). We do this because we want the model to re-learn new categories from our dataset vs. its original training data (i.e., we want to throw away the current final layer). We have to rename the last fully connected layer from “fc8” to something else, “fc9” for example. Finally, we also need to adjust the number of categories from 1000 to 2, by changing num_output to 2.
Here are the changes we need to make:
@@ -332,8 +332,8 @@
}
layer {
- name: "fc8"
+ name: "fc9"
type: "InnerProduct"
bottom: "fc7"
- top: "fc8"
+ top: "fc9"
param {
lr_mult: 1
@@ -345,5 +345,5 @@
}
inner_product_param {
- num_output: 1000
+ num_output: 2
weight_filler {
type: "gaussian"
@@ -359,5 +359,5 @@
name: "accuracy"
type: "Accuracy"
- bottom: "fc8"
+ bottom: "fc9"
bottom: "label"
top: "accuracy"
@@ -367,5 +367,5 @@
name: "loss"
type: "SoftmaxWithLoss"
- bottom: "fc8"
+ bottom: "fc9"
bottom: "label"
top: "loss"
@@ -375,5 +375,5 @@
name: "softmax"
type: "Softmax"
- bottom: "fc8"
+ bottom: "fc9"
top: "softmax"
include { stage: "deploy" }
I’ve included the fully modified file I’m using in src/alexnet-customized.prototxt.
This time our accuracy starts at ~60% and climbs right away to 87.5%, then to 96% and all the way up to 100%, with the Loss steadily decreasing. After 5 minutes we end up with an accuracy of 100% and a loss of 0.0009.
Testing the same seahorse image our previous network got wrong, we see a complete reversal: 100% seahorse.
Even a children’s drawing of a seahorse works:
The same goes for a dolphin:
Even with images that you think might be hard, like this one that has multiple dolphins close together, and with their bodies mostly underwater, it does the right thing:
Like the previous AlexNet model we used for fine tuning, we can use GoogLeNet as well. Modifying the network is a bit trickier, since you have to redefine three fully connected layers instead of just one.
To fine tune GoogLeNet for our use case, we need to once again create a new Classification Model:
We rename all references to the three fully connected classification layers, loss1/classifier, loss2/classifier, and loss3/classifier, and redefine the number of categories (num_output: 2). Here are the changes we need to make in order to rename the 3 classifier layers, as well as to change from 1000 to 2 categories:
@@ -917,10 +917,10 @@
exclude { stage: "deploy" }
}
layer {
- name: "loss1/classifier"
+ name: "loss1a/classifier"
type: "InnerProduct"
bottom: "loss1/fc"
- top: "loss1/classifier"
+ top: "loss1a/classifier"
param {
lr_mult: 1
decay_mult: 1
@@ -930,7 +930,7 @@
decay_mult: 0
}
inner_product_param {
- num_output: 1000
+ num_output: 2
weight_filler {
type: "xavier"
std: 0.0009765625
@@ -945,7 +945,7 @@
layer {
name: "loss1/loss"
type: "SoftmaxWithLoss"
- bottom: "loss1/classifier"
+ bottom: "loss1a/classifier"
bottom: "label"
top: "loss1/loss"
loss_weight: 0.3
@@ -954,7 +954,7 @@
layer {
name: "loss1/top-1"
type: "Accuracy"
- bottom: "loss1/classifier"
+ bottom: "loss1a/classifier"
bottom: "label"
top: "loss1/accuracy"
include { stage: "val" }
@@ -962,7 +962,7 @@
layer {
name: "loss1/top-5"
type: "Accuracy"
- bottom: "loss1/classifier"
+ bottom: "loss1a/classifier"
bottom: "label"
top: "loss1/accuracy-top5"
include { stage: "val" }
@@ -1705,10 +1705,10 @@
exclude { stage: "deploy" }
}
layer {
- name: "loss2/classifier"
+ name: "loss2a/classifier"
type: "InnerProduct"
bottom: "loss2/fc"
- top: "loss2/classifier"
+ top: "loss2a/classifier"
param {
lr_mult: 1
decay_mult: 1
@@ -1718,7 +1718,7 @@
decay_mult: 0
}
inner_product_param {
- num_output: 1000
+ num_output: 2
weight_filler {
type: "xavier"
std: 0.0009765625
@@ -1733,7 +1733,7 @@
layer {
name: "loss2/loss"
type: "SoftmaxWithLoss"
- bottom: "loss2/classifier"
+ bottom: "loss2a/classifier"
bottom: "label"
top: "loss2/loss"
loss_weight: 0.3
@@ -1742,7 +1742,7 @@
layer {
name: "loss2/top-1"
type: "Accuracy"
- bottom: "loss2/classifier"
+ bottom: "loss2a/classifier"
bottom: "label"
top: "loss2/accuracy"
include { stage: "val" }
@@ -1750,7 +1750,7 @@
layer {
name: "loss2/top-5"
type: "Accuracy"
- bottom: "loss2/classifier"
+ bottom: "loss2a/classifier"
bottom: "label"
top: "loss2/accuracy-top5"
include { stage: "val" }
@@ -2435,10 +2435,10 @@
}
}
layer {
- name: "loss3/classifier"
+ name: "loss3a/classifier"
type: "InnerProduct"
bottom: "pool5/7x7_s1"
- top: "loss3/classifier"
+ top: "loss3a/classifier"
param {
lr_mult: 1
decay_mult: 1
@@ -2448,7 +2448,7 @@
decay_mult: 0
}
inner_product_param {
- num_output: 1000
+ num_output: 2
weight_filler {
type: "xavier"
}
@@ -2461,7 +2461,7 @@
layer {
name: "loss3/loss"
type: "SoftmaxWithLoss"
- bottom: "loss3/classifier"
+ bottom: "loss3a/classifier"
bottom: "label"
top: "loss"
loss_weight: 1
@@ -2470,7 +2470,7 @@
layer {
name: "loss3/top-1"
type: "Accuracy"
- bottom: "loss3/classifier"
+ bottom: "loss3a/classifier"
bottom: "label"
top: "accuracy"
include { stage: "val" }
@@ -2478,7 +2478,7 @@
layer {
name: "loss3/top-5"
type: "Accuracy"
- bottom: "loss3/classifier"
+ bottom: "loss3a/classifier"
bottom: "label"
top: "accuracy-top5"
include { stage: "val" }
@@ -2489,7 +2489,7 @@
layer {
name: "softmax"
type: "Softmax"
- bottom: "loss3/classifier"
+ bottom: "loss3a/classifier"
top: "softmax"
include { stage: "deploy" }
}
I’ve put the complete file in src/googlenet-customized.prototxt.
Q: "What about changes to the prototext definitions of these networks? We changed the fully connected layer name(s), and the number of categories. What else could, or should be changed, and in what circumstances?"
Great question, and it's something I'm wondering, too. For example, I know that we can "fix" certain layers so the weights don't change. Doing other things involves understanding how the layers work, which is beyond this guide, and also beyond its author at present!
Like we did with fine tuning AlexNet, we also reduce the learning rate by a factor of 10, from 0.01 to 0.001.
Q: "What other changes would make sense when fine tuning these networks? What about different numbers of epochs, batch sizes, solver types (Adam, AdaDelta, AdaGrad, etc), learning rates, policies (Exponential Decay, Inverse Decay, Sigmoid Decay, etc), step sizes, and gamma values?"
Great question, and one that I wonder about as well. I only have a vague understanding of these and it’s likely that there are improvements we can make if you know how to alter these values when training. This is something that needs better documentation.
Because GoogLeNet has a more complicated architecture than AlexNet, fine tuning it requires more time. On my laptop, it takes 10 minutes to retrain GoogLeNet with our dataset, achieving 100% accuracy and a loss of 0.0070:
Just as we saw with the fine tuned version of AlexNet, our modified GoogLeNet performs amazingly well--the best so far:
With our network trained and tested, it’s time to download and use it. Each of the models we trained in DIGITS has a Download Model button, as well as a way to select different snapshots within our training run (e.g., Epoch #30):
Clicking Download Model downloads a tar.gz
archive containing the following files:
deploy.prototxt
mean.binaryproto
solver.prototxt
info.json
original.prototxt
labels.txt
snapshot_iter_90.caffemodel
train_val.prototxt
There’s a nice description in the Caffe documentation about how to use the model we just built. It says:
A network is defined by its design (.prototxt), and its weights (.caffemodel). As a network is being trained, the current state of that network's weights are stored in a .caffemodel. With both of these we can move from the train/test phase into the production phase.
In its current state, the network is not designed for deployment. Before we can release our network as a product, we often need to alter it in a few ways:
- Remove the data layer that was used for training, as in the case of classification we are no longer providing labels for our data.
- Remove any layer that is dependent upon data labels.
- Set the network up to accept data.
- Have the network output the result.
DIGITS has already done the work for us, separating out the different versions of our prototxt
files. The files we’ll care about when using this network are:
- deploy.prototxt - the definition of our network, ready for accepting image input data
- mean.binaryproto - our model will need us to subtract the image mean from each image that it processes, and this is the mean image
- labels.txt - a list of our labels (dolphin, seahorse) in case we want to print them vs. just the category number
- snapshot_iter_90.caffemodel - the trained weights for our network

We can use these files in a number of ways to classify new images. For example, in our CAFFE_ROOT we can use build/examples/cpp_classification/classification.bin to classify one image:
$ cd $CAFFE_ROOT/build/examples/cpp_classification
$ ./classification.bin deploy.prototxt snapshot_iter_90.caffemodel mean.binaryproto labels.txt dolphin1.jpg
This will spit out a bunch of debug text, followed by the predictions for each of our two categories:
0.9997 - “dolphin”
0.0003 - “seahorse”
You can read the complete C++ source for this in the Caffe examples.
For a classification version that uses the Python interface, DIGITS includes a nice example. There's also a fairly well documented Python walkthrough in the Caffe examples.
Let's write a program that uses our fine-tuned GoogLeNet model to classify the untrained images we have in data/untrained-samples. I've cobbled this together based on the examples above, as well as the caffe
Python module's source, which you should prefer to anything I'm about to say.
A full version of what I'm going to discuss is available in src/classify-samples.py. Let's begin!
First, we'll need the NumPy module. In a moment we'll be using NumPy to work with ndarrays, which Caffe uses a lot. If you haven't used them before, as I had not, you'd do well to begin by reading this Quickstart tutorial.
Second, we'll need to load the caffe
module from our CAFFE_ROOT
dir. If it's not already included in your Python environment, you can force it to load by adding it manually. Along with it we'll also import caffe's protobuf module:
import os
import sys
import numpy as np
caffe_root = '/path/to/your/caffe_root'
sys.path.insert(0, os.path.join(caffe_root, 'python'))
import caffe
from caffe.proto import caffe_pb2
Next we need to tell Caffe whether to use the CPU or GPU. For our experiments, the CPU is fine:
caffe.set_mode_cpu()
Now we can use caffe
to load our trained network. To do so, we'll need some of the files we downloaded from DIGITS, namely:
- deploy.prototxt - our "network file", the description of the network
- snapshot_iter_90.caffemodel - our trained "weights"

We obviously need to provide the full path, and I'll assume that my files are in a dir called model/:
model_dir = 'model'
deploy_file = os.path.join(model_dir, 'deploy.prototxt')
weights_file = os.path.join(model_dir, 'snapshot_iter_90.caffemodel')
net = caffe.Net(deploy_file, caffe.TEST, weights=weights_file)
The caffe.Net()
constructor takes a network file, a phase (caffe.TEST
or caffe.TRAIN
), as well as an optional weights filename. When we provide a weights file, the Net
will automatically load them for us. The Net
has a number of methods and attributes you can use.
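As a quick sanity check, here's a small sketch (assuming the net variable we just created) that walks the loaded network and prints each blob's name and shape:

# net.blobs maps blob names to Blob objects whose .data is an ndarray
for name, blob in net.blobs.items():
    print("%s %s" % (name, blob.data.shape))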
Note: There is also a deprecated version of this constructor, which seems to get used often in sample code on the web. It looks like this, in case you encounter it:
net = caffe.Net(str(deploy_file), str(model_file), caffe.TEST)
We're interested in loading images of various sizes into our network for testing. As a result, we'll need to transform them into a shape that our network can use (i.e., colour, 256x256). Caffe provides the Transformer
class for this purpose. We'll use it to create a transformation appropriate for our images/network:
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
# set_transpose: https://github.com/BVLC/caffe/blob/61944afd4e948a4e2b4ef553919a886a8a8b8246/python/caffe/io.py#L187
transformer.set_transpose('data', (2, 0, 1))
# set_raw_scale: https://github.com/BVLC/caffe/blob/61944afd4e948a4e2b4ef553919a886a8a8b8246/python/caffe/io.py#L221
transformer.set_raw_scale('data', 255)
# set_channel_swap: https://github.com/BVLC/caffe/blob/61944afd4e948a4e2b4ef553919a886a8a8b8246/python/caffe/io.py#L203
transformer.set_channel_swap('data', (2, 1, 0))
We can also use the mean.binaryproto
file DIGITS gave us to set our transformer's mean:
# This code for setting the mean from https://github.com/NVIDIA/DIGITS/tree/master/examples/classification
mean_file = os.path.join(model_dir, 'mean.binaryproto')
with open(mean_file, 'rb') as infile:
    blob = caffe_pb2.BlobProto()
    blob.MergeFromString(infile.read())
    if blob.HasField('shape'):
        blob_dims = blob.shape
        assert len(blob_dims) == 4, 'Shape should have 4 dimensions - shape is %s' % blob.shape
    elif blob.HasField('num') and blob.HasField('channels') and \
            blob.HasField('height') and blob.HasField('width'):
        blob_dims = (blob.num, blob.channels, blob.height, blob.width)
    else:
        raise ValueError('blob does not provide shape or 4d dimensions')
    pixel = np.reshape(blob.data, blob_dims[1:]).mean(1).mean(1)
    transformer.set_mean('data', pixel)
If we had a lot of labels, we might also choose to read in our labels file, which we can use later by looking up the label for a probability using its position (e.g., 0=dolphin, 1=seahorse):
labels_file = os.path.join(model_dir, 'labels.txt')
labels = np.loadtxt(labels_file, str, delimiter='\n')
Now we're ready to classify an image. We'll use caffe.io.load_image()
to read our image file, then use our transformer to reshape it and set it as our network's data layer:
# Load the image from disk using caffe's built-in I/O module
image = caffe.io.load_image(fullpath)
# Preprocess the image into the proper format for feeding into the model
net.blobs['data'].data[...] = transformer.preprocess('data', image)
Q: "How could I use images (i.e., frames) from a camera or video stream instead of files?"
Great question, here's a skeleton to get you started:
import cv2
...
# Get the shape of our input data layer, so we can resize the image
input_shape = net.blobs['data'].data.shape
...
webCamCap = cv2.VideoCapture(0) # could also be a URL, filename
if webCamCap.isOpened():
    rval, frame = webCamCap.read()
else:
    rval = False

while rval:
    rval, frame = webCamCap.read()
    net.blobs['data'].data[...] = transformer.preprocess('data', frame)
    ...

webCamCap.release()
Back to our problem, we next need to run the image data through our network and read out the probabilities from our network's final 'softmax'
layer, which will be in order by label category:
# Run the image's pixel data through the network
out = net.forward()
# Extract the probabilities of our two categories from the final layer
softmax_layer = out['softmax']
# Here we're converting to Python types from ndarray floats
dolphin_prob = softmax_layer.item(0)
seahorse_prob = softmax_layer.item(1)
# Print the results. I'm using labels just to show how it's done
label = labels[0] if dolphin_prob > seahorse_prob else labels[1]
filename = os.path.basename(fullpath)
print '%s is a %s dolphin=%.3f%% seahorse=%.3f%%' % (filename, label, dolphin_prob*100, seahorse_prob*100)
Running the full version of this (see src/classify-samples.py) using our fine-tuned GoogLeNet network on our data/untrained-samples images gives me the following output:
[...truncated caffe network output...]
dolphin1.jpg is a dolphin dolphin=99.968% seahorse=0.032%
dolphin2.jpg is a dolphin dolphin=99.997% seahorse=0.003%
dolphin3.jpg is a dolphin dolphin=99.943% seahorse=0.057%
seahorse1.jpg is a seahorse dolphin=0.365% seahorse=99.635%
seahorse2.jpg is a seahorse dolphin=0.000% seahorse=100.000%
seahorse3.jpg is a seahorse dolphin=0.014% seahorse=99.986%
I'm still trying to learn all the best practices for working with models in code. I wish I had more and better documented code examples, APIs, premade modules, etc to show you here. To be honest, most of the code examples I’ve found are terse, and poorly documented--Caffe’s documentation is spotty, and assumes a lot.
It seems to me like there’s an opportunity for someone to build higher-level tools on top of the Caffe interfaces for beginners and basic workflows like we've done here. It would be great if there were more simple modules in high-level languages that I could point you at that “did the right thing” with our model; someone could/should take this on, and make using Caffe models as easy as DIGITS makes training them. I’d love to have something I could use in node.js, for example. Ideally one shouldn’t be required to know so much about the internals of the model or Caffe. I haven’t used it yet, but DeepDetect looks interesting on this front, and there are likely many other tools I don’t know about.
At the beginning we said that our goal was to write a program that used a neural network to correctly classify all of the images in data/untrained-samples. These are images of dolphins and seahorses that were never used in the training or validation data:
Let's look at how each of our three attempts did with this challenge:
Attempt 1 (AlexNet, trained from scratch):

Image | Dolphin | Seahorse | Result |
---|---|---|---|
dolphin1.jpg | 71.11% | 28.89% | 😑 |
dolphin2.jpg | 99.2% | 0.8% | 😎 |
dolphin3.jpg | 63.3% | 36.7% | 😕 |
seahorse1.jpg | 95.04% | 4.96% | 😞 |
seahorse2.jpg | 56.64% | 43.36% | 😕 |
seahorse3.jpg | 7.06% | 92.94% | 😁 |
Attempt 2 (fine-tuned AlexNet):

Image | Dolphin | Seahorse | Result |
---|---|---|---|
dolphin1.jpg | 99.1% | 0.09% | 😎 |
dolphin2.jpg | 99.5% | 0.05% | 😎 |
dolphin3.jpg | 91.48% | 8.52% | 😁 |
seahorse1.jpg | 0% | 100% | 😎 |
seahorse2.jpg | 0% | 100% | 😎 |
seahorse3.jpg | 0% | 100% | 😎 |
Attempt 3 (fine-tuned GoogLeNet):

Image | Dolphin | Seahorse | Result |
---|---|---|---|
dolphin1.jpg | 99.86% | 0.14% | 😎 |
dolphin2.jpg | 100% | 0% | 😎 |
dolphin3.jpg | 100% | 0% | 😎 |
seahorse1.jpg | 0.5% | 99.5% | 😎 |
seahorse2.jpg | 0% | 100% | 😎 |
seahorse3.jpg | 0.02% | 99.98% | 😎 |
It’s amazing how well our model works, and what’s possible by fine tuning a pretrained network. Obviously our dolphin vs. seahorse example is contrived, and the dataset overly limited--we really do want more and better data if we want our network to be robust. But since our goal was to examine the tools and workflows of neural networks, it’s turned out to be an ideal case, especially since it didn’t require expensive equipment or massive amounts of time.
Above all I hope that this experience helps to remove the overwhelming fear of getting started. Deciding whether or not it’s worth investing time in learning the theories of machine learning and neural networks is easier when you’ve been able to see it work in a small way. Now that you’ve got a setup and a working approach, you can try doing other sorts of classifications. You might also look at the other types of things you can do with Caffe and DIGITS, for example, finding objects within an image, or doing segmentation.
Have fun with machine learning!
Also available in Chinese (Traditional).
Also available in Korean.
Author: Humphd
Source Code: https://github.com/humphd/have-fun-with-machine-learning
License: View license
This tutorial presents Ansible step-by-step. You'll need to have a (virtual or physical) machine to act as an Ansible node. A Vagrant environment is provided for going through this tutorial.
Ansible is a configuration management software that lets you control and configure nodes from another machine. What makes it different from other management software is that Ansible uses (potentially existing) SSH infrastructure, while others (Chef, Puppet, ...) need a specific PKI infrastructure to be set up.
Ansible also emphasizes push mode, where configuration is pushed from a master machine (a master machine is only a machine where you can SSH to nodes from) to nodes, while most other CM typically do it the other way around (nodes pull their config at times from a master machine).
This mode is really interesting since you do not need to have a 'publicly' accessible 'master' to be able to configure remote nodes: it's the nodes that need to be accessible (we'll see later that 'hidden' nodes can pull their configuration too!), and most of the time they are.
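To make push mode concrete, here's the kind of ad-hoc command we'll be running later in the tutorial; the inventory file and host group names here are just placeholders:

# Push a module call and a shell command over SSH to every host in the 'web' group
ansible web -i hosts -m ping
ansible web -i hosts -a "uptime"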
This tutorial has been tested with Ansible 2.9.
We're also assuming you have a keypair in your ~/.ssh directory.
vagrant up
The reference is the installation guide, but I strongly recommend the Using pip & virtualenv (highly recommended!) method.
The best way to install Ansible (by far) is to use pip
and virtual environments.
Using virtualenv will let you have multiple Ansible versions installed side by side, and test upgrades or use different versions in different projects. Also, by using a virtualenv, you won't pollute your system's python installation.
Check virtualenvwrapper for this. It makes managing virtualenvs very easy.
Under Ubuntu, installing virtualenv & virtualenvwrapper can be done like so:
sudo apt install python3-virtualenv virtualenvwrapper python3-pip
exec $SHELL
You can then create a virtualenv:
mkvirtualenv ansible-tuto
workon ansible-tuto
(mkvirtualenv usually switches you automatically to your newly created virtualenv, so here workon ansible-tuto is not strictly necessary, but let's be safe).
Then, install ansible via pip:
pip install ansible==2.7.1
(or use whatever version you want).
When you're done, you can deactivate your virtualenv to return to your system's python settings & modules:
deactivate
If you later want to return to your virtualenv:
workon ansible-tuto
Use lsvirtualenv
to list all your virtual environments.
The Ansible devel branch is always usable, so we'll run straight from a git checkout. You might need to install git for this (sudo apt-get install git on Debian/Ubuntu).
git clone git://github.com/ansible/ansible.git
cd ./ansible
At this point, we can load the Ansible environment:
source ./hacking/env-setup
sudo apt-get install ansible
When running from a distribution package, this is absolutely not necessary. If you prefer running from an up-to-date Debian package, Ansible provides a make target to build it. You need a few packages to build the deb and a few dependencies:
sudo apt-get install make fakeroot cdbs python-support python-yaml python-jinja2 python-paramiko python-crypto python-pip
git clone git://github.com/ansible/ansible.git
cd ./ansible
make deb
sudo dpkg -i ../ansible_x.y_all.deb (version may vary)
git clone https://github.com/leucos/ansible-tuto.git
cd ansible-tuto
You can run the tutorials here interactively including a very simple setup with docker.
Check this repository for details.
It's highly recommended to use Vagrant to follow this tutorial. If you don't have it already, setting up should be quite easy and is described in step-00/README.md.
If you wish to proceed without Vagrant (not recommended!), go straight to step-01/README.md.
Just in case you want to skip to a specific step, here is a topic table of contents.
Thanks to all people who have contributed to this tutorial:
(and sorry if I forgot anyone)
I've been using Ansible almost since its birth, but I learned a lot in the process of writing this tutorial. If you want to jump in, it's a great way to learn; feel free to add your contributions.
The chapters being written live in the writing branch.
If you have ideas on topics that would require a chapter, please open a PR.
I'm also open on pairing for writing chapters. Drop me a note if you're interested.
If you make changes or add chapters, please fill the test/expectations file and run the tests (test/run.sh). See the test/run.sh file for (a bit) more information.
When adding a new chapter (e.g. step-NN), please issue:
cd step-99
ln -sf ../step-NN/{hosts,roles,site.yml,group_vars,host_vars} .
For typos, grammar, etc... please send a PR for the master branch directly.
Thank you!
Author: leucos
Source Code: https://github.com/leucos/ansible-tuto
License: View license
JWT Authentication Tutorial With Express & MongoDB | Rest API Project | Node.js for Beginners #10
In this video we will continue to build our contact management Rest API project using Express & MongoDB. We will build user registration and login endpoints. We will see how to hash raw passwords and add authentication using JWT to sign and verify access tokens, along with protecting routes.
⭐️ Support my channel⭐️ https://www.buymeacoffee.com/dipeshmalvia
⭐️ GitHub link for Reference ⭐️ https://github.com/dmalvia/Express_MongoDB_Rest_API_Tutorial
⭐️ Node.js for beginners Playlist ⭐️ https://youtube.com/playlist?list=PLTP3E5bPW796_icZanMqhdg7i0Cl7Y51F
🔥 Video contents... ENJOY 👇
⭐️ JavaScript ⭐️
🔗 Social Medias 🔗
⭐️ Tags ⭐️ - Node.js, Express & MongoDB Project - Build Rest API Project Express & MongoDB - JWT & EXPRESS Authentication Crash Course - Express Project For Beginners
⭐️ Hashtags ⭐️ #nodejs #express #beginners #tutorial
Disclaimer: It doesn't feel good to have a disclaimer in every video, but this is how the world is right now. All videos are for educational purposes; use them wisely. Any video may have a slight mistake, so please make decisions based on your own research. This video is not forcing anything on you.
https://youtu.be/ICMnoKxlYYg
Build Rest Api Project With Express & MongoDB | CRUD API | Node.js Tutorial for Beginners #9
In this video we will continue to build our contact management Rest API project using Express & MongoDB. We will implement project-wide error handling, MongoDB setup, and CRUD operations for our contacts resource.
⭐️ Support my channel⭐️ https://www.buymeacoffee.com/dipeshmalvia
⭐️ GitHub link for Reference ⭐️ https://github.com/dmalvia/Express_MongoDB_Rest_API_Tutorial
⭐️ Node.js for beginners Playlist ⭐️ https://youtube.com/playlist?list=PLTP3E5bPW796_icZanMqhdg7i0Cl7Y51F
🔥 Video contents... ENJOY 👇
⭐️ JavaScript ⭐️
🔗 Social Medias 🔗
⭐️ Tags ⭐️ - Node.js, Express & MongoDB Project - Build Rest API Project Express & MongoDB - Express CRUD API Tutorial - Node.Js & Express Crash Course
⭐️ Hashtags ⭐️ #nodejs #express #beginners #tutorial
Disclaimer: It doesn't feel good to have a disclaimer in every video, but this is how the world is right now. All videos are for educational purposes; use them wisely. Any video may have a slight mistake, so please make decisions based on your own research. This video is not forcing anything on you.
https://youtu.be/niw5KSO94YI
Smart homes are the future, but what do you do if you have an old air conditioner or heater in your home? Replacing old devices isn’t always feasible, but you can automate them with a Raspberry Pi.
The air conditioning in many homes lacks modern niceties like central automation, programmable thermostats, multiple sensors, or Wi-Fi control. But older air-conditioning tech is still reliable, so in many cases, it’s unlikely to be upgraded soon.
That, however, requires users to frequently interrupt work or sleep to turn an air conditioner on or off. This is particularly true in houses with tight layouts, like mine:
My unorthodox floor plan makes cooling with a single in-window air conditioning unit a challenge. There is no direct line of sight for remote control from the bedroom and no direct path for cool air to reach all the rooms.
US homes commonly have central air conditioning, but this isn’t the case globally. Not having central AC limits automation options, making it more difficult to achieve the same temperature throughout the whole home. In particular, it makes it hard to avoid temperature fluctuations that may require manual intervention to address.
As an engineer and Internet of Things (IoT) enthusiast, I saw an opportunity to do a few useful things at once:
My air conditioner is a basic device with a simple infrared remote control. I was aware of devices that enable air-conditioning units to be used with smart home systems, such as Sensibo or Tado. Instead, I took a DIY approach and created a Raspberry Pi thermostat, allowing for more sophisticated control based on sensor input from various rooms.
I was already using several Raspberry Pi Zero Ws, coupled with DHT22 sensor modules, to monitor the temperature and humidity in different rooms. Because of the segmented floor plan, I installed the sensors to monitor how warm it was in different parts of my house.
I also have a home surveillance system (not required for this project) on a Windows 10 PC with WSL 2. I wanted to integrate the sensor readings into the surveillance videos, as a text overlay on the video feed.
The sensors were straightforward to wire, having only three connections: power, ground, and data.
A wiring diagram for the DHT22 module, showing the pins used to connect it to the Raspberry Pi.
I used Raspberry Pi OS Lite, installing Python 3 with pip and the Adafruit_DHT library for Python to read the sensor data. It’s technically deprecated but simpler to install and use. Plus, it requires fewer resources for our use case.
I also wanted to have a log of all the readings so I used a third-party server, ThingSpeak, to host my data and serve it via API calls. It’s relatively straightforward, and since I did not need real-time readings, I opted to send data every five minutes.
import requests
import time
import Adafruit_DHT

KEY = 'api key'

def pushData(temp: float, hum: float):
    '''Takes temp and humidity and pushes them to ThingSpeak'''
    url = 'https://api.thingspeak.com/update'
    params = {'api_key': KEY, 'field5': temp, 'field6': hum}
    requests.get(url, params=params)

def getData(sensor: int, pin: int):
    '''
    Input DHT sensor type and RPi GPIO pin to collect a sample of data

    Parameters:
        sensor: Either 11 or 22, depending on sensor used (DHT11 or DHT22)
        pin: GPIO pin used (e.g. 4)
    '''
    try:
        humidity, temperature = Adafruit_DHT.read_retry(sensor, pin)
        return humidity, temperature
    except Exception:
        # read_retry can raise or return None values if the sensor read fails
        return None, None

if __name__ == "__main__":
    sensor = 22  # Change to 11 if using DHT11
    pin = 4      # I used GPIO pin 4
    while True:
        h, t = getData(sensor, pin)
        if h is not None and t is not None:
            pushData(t, h)
        time.sleep(300)
On my dedicated surveillance PC, running WSL 2, I set up a PHP script that fetches the data from ThingSpeak, formats it, and writes it in a simple .txt
file. This .txt
file is needed for my surveillance software to overlay it on top of the video stream.
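That script isn't reproduced here, but a minimal sketch of the idea looks something like this; the channel ID, read key, and output filename are placeholders, while field5/field6 match the fields used by the Python script above:

<?php
// Fetch the latest reading from ThingSpeak and write a one-line overlay file
$channelId = 'YOUR_CHANNEL_ID';
$readKey   = 'YOUR_READ_API_KEY';
$url  = "https://api.thingspeak.com/channels/$channelId/feeds/last.json?api_key=$readKey";
$data = json_decode(file_get_contents($url), true);
$line = sprintf("Temp: %.1f C  Humidity: %.1f %%", $data['field5'], $data['field6']);
file_put_contents(__DIR__ . '/sensor-overlay.txt', $line);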
Because I had some automation in the house already, including smart light bulbs and several routines in Google Home, it followed that I would use the sensor data as a smart thermostat in Google Home. My plan was to create a Google Home routine that would turn the air conditioning on or off automatically based on room temperature, without the need for user input.
The PNI SafeHome PT11IR Wi-Fi smart remote control unit.
Pricier all-in-one solutions like those from Sensibo and Tado require less technical setup, but for a fraction of the cost, the PNI SafeHome PT11IR enabled me to use my phone to control any number of infrared devices within its range. The control app, Tuya, integrates with Google Home.
With a smart-enabled air conditioner and sensor data available, I tried to get the Raspberry recognized as a thermostat in Google Home but to no avail. I was able to send the sensor data to Google IoT Cloud and its Pub/Sub service, but there was no way to send it to Google Home to create a routine based on that data.
After pondering this for a few days, I thought of a new approach. What if I didn’t need to send the data to Google Home? What if I could check the data locally and send a command to Google Home to turn the air conditioner on or off? I tested voice commands with success, so this approach seemed promising.
A quick search turned up Assistant Relay, a Node.js-powered system that enables a user to send commands to Google Assistant, allowing the user to tie anything to Google Assistant as long as it knows what to do with the input it receives.
Even better, with Assistant Relay, I could send commands to my Google Assistant by simply sending POST requests to the device running the Node.js server (in this case, my Raspberry Pi Zero W) with some required parameters. That’s it. The script is well documented so I won’t get into much detail here.
Since the sensor data was already being read on the surveillance PC, I figured I could integrate the request into the PHP script to keep things in one place.
Since you likely don’t have the .txt
file requirement, you can simplify the process by directly reading the sensor data and issuing commands based on that data to the Google Assistant Service, via Assistant Relay. All of this can be done from a single Raspberry Pi device, without the need for additional hardware. However, as I already had completed half of the work, it made sense to use what I had. Both scripts in this article can be used on a single machine; furthermore, the PHP script can be rewritten in Python, if needed.
I wanted the automatic power cycling to happen only during nighttime, so I defined the hours for which I wanted to automate operation—10 PM to 7 AM—and set the preferred temperature. Identifying the correct temperature intervals—to achieve a comfortable range without shortening the life span of the air-conditioning unit by cycling its power too often—required a few tries to get it right.
The PHP script that created the sensor data overlay was set up to run every five minutes via a cron job, so the only things I added to it were the conditions and the POST request.
However, this created an issue. If the conditions were met, the script would send a “turn on” command every five minutes, even if the air conditioning was already on. This caused the unit to beep annoyingly, even on the “turn off” command. To fix this, I needed a way to read the current status of the unit.
Elegance wasn’t a priority, so I made a JSON file containing an array. Whenever the “turn on” or “turn off” commands would complete successfully, the script would then append the last status to this array. This solved the redundancy problem; however, particularly hot days or excessive heating during the winter could cause the conditions to be met again. I decided a manual override would suffice in these situations. I’ll leave adding a return before the switch snippet to this end as an exercise for the reader:
<?php
switch(true)
{
case $temperature > 27:
turnAc('on');
break;
case $temperature < 24:
turnAc('off');
break;
}
function turnAc($status)
{
$command = 'turn on hallway ac'; // hallway ac is the Google Home device name for my AC
if ($status == 'off')
{
$command = 'turn off hallway ac';
}
if ($status == 'on' && checkAc() == 'on')
{
return;
}
if ($status == 'off' && checkAc() == 'off')
{
return;
}
$curl = curl_init();
curl_setopt_array($curl, array(
CURLOPT_URL => 'local assistant server ip',
CURLOPT_RETURNTRANSFER => true,
CURLOPT_ENCODING => '',
CURLOPT_MAXREDIRS => 10,
CURLOPT_TIMEOUT => 0,
CURLOPT_FOLLOWLOCATION => true,
CURLOPT_HTTP_VERSION => CURL_HTTP_VERSION_1_1,
CURLOPT_CUSTOMREQUEST => 'POST',
CURLOPT_POSTFIELDS => '{
    "command": "'.$command.'",
    "converse": false,
    "user": "designated user"
}',
CURLOPT_HTTPHEADER => array(
'Content-Type: application/json'
),
));
$response = curl_exec($curl);
curl_close($curl);
$obj = null;
try {
$obj = json_decode($response);
} catch (Exception $e) {
}
if (!$obj || $obj->success != true)
{
markAc($status == 'on' ? 'off' : 'on'); // if error, mark it as opposite status
return;
}
markAc($status);
}
function markAc($status)
{
$file = __DIR__ . "/markAc.json";
$json = json_decode(file_get_contents($file), true);
$json[] = array(date('F j, Y H:i:s'), $status);
$handler = fopen($file, "w") or die("Unable to open file!");
$txt = json_encode($json);
fwrite($handler, $txt);
fclose($handler);
}
function checkAc()
{
$file = __DIR__ . "/markAc.json";
$json = json_decode(file_get_contents($file), true);
$end = array_pop($json);
return $end[1];
}
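As mentioned earlier, the PHP script could be rewritten in Python and run on the Pi itself. Here's a rough sketch of the same on/off logic; the Assistant Relay URL and state file name are placeholders to adapt to your setup, while the thresholds and device name match the PHP above:

import json
import requests

RELAY_URL = 'http://assistant-relay-host:3000/assistant'  # placeholder for your Assistant Relay endpoint
STATE_FILE = 'ac_state.json'

def send_command(command):
    # Assistant Relay accepts a POST with the command text, user, and converse flag
    payload = {'command': command, 'user': 'designated user', 'converse': False}
    return requests.post(RELAY_URL, json=payload).ok

def set_ac(target, current):
    # Avoid re-sending the same command so the unit doesn't beep needlessly
    if target == current:
        return current
    verb = 'turn on' if target == 'on' else 'turn off'
    if send_command('%s hallway ac' % verb):
        with open(STATE_FILE, 'w') as f:
            json.dump({'status': target}, f)
        return target
    return current

def control(temperature, current):
    if temperature > 27:
        return set_ac('on', current)
    if temperature < 24:
        return set_ac('off', current)
    return current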
This worked but not on the first attempt. I had to figure out things along the way and tweak them as needed. Hopefully, with the benefit of my experience, you won’t need to do as much to get it right the first time.
I was motivated to automate my air conditioning because the unconventional layout of my home sometimes resulted in vastly different temperatures in different rooms. But automating heating and cooling has benefits even for those who don’t face this particular issue.
People across the world live in various climates and pay different prices for energy (and different rates at different times of the day), so even modest improvements in energy efficiency can make automation worthwhile in certain regions.
Furthermore, as more and more homes become automated, there is reason to explore the potential of automating older power-hungry devices and appliances such as air conditioners, electric heaters, and water heaters. Because these devices are typically bulky, difficult to install, and expensive to upgrade, many people will be stuck with them for years to come. Making these “dumb” devices a bit smarter can not only improve comfort and energy efficiency but also extend their life spans.
Original article source at: https://www.toptal.com/
Ever wonder how game developers deliver entertaining interplay with the non-player characters they create? Learn how to develop them yourself in our finite-state machine tutorial.
In the competitive world of gaming, developers strive to offer an entertaining user experience for those who interact with the non-player characters (NPCs) that we create. Developers can deliver this interactivity by using finite-state machines (FSMs) to create AI solutions that simulate intelligence in our NPCs.
AI trends have shifted toward behavior trees, but FSMs remain relevant: they're incorporated, in one capacity or another, into virtually every electronic game.
An FSM is a model of computation in which only one of a finite number of hypothetical states can be active at one time. An FSM transitions from one state to another, responding to conditions or inputs. Its core components include:
Component | Description |
---|---|
State | One of a finite set of options indicating the current overall condition of an FSM; any given state includes an associated set of actions |
Action | What a state does when the FSM queries it |
Decision | The logic establishing when a transition takes place |
Transition | The process of changing states |
While we will focus on FSMs from the perspective of AI implementation, concepts such as animation state machines and general game states also fall under the FSM umbrella.
Let’s consider the example of the classic arcade game Pac-Man. In the game’s initial state (the “chase” state), the NPCs are colorful ghosts that pursue and eventually outpace the player. The ghosts transition into the evade state whenever the player eats a power pellet and experiences a power-up, gaining the ability to eat the ghosts. The ghosts, now blue in color, evade the player until the power-up times out and the ghosts transition back to the chase state, in which their original behaviors and colors are restored.
A Pac-Man ghost is always in one of two states: chase or evade. Naturally, we must provide two transitions—one from chase to evade, the other from evade to chase:
Transitions Between Pac-Man Ghost States
The finite-state machine, by design, queries the current state, which queries the decision(s) and action(s) of that state. The following diagram represents our Pac-Man example and shows a decision that checks the status of the player’s power-up. If a power-up has begun, the NPCs transition from chase to evade. If a power-up has ended, the NPCs transition from evade to chase. Finally, if there is no power-up change, no transition occurs.
Components of the Pac-Man Ghost FSM
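Before we build the full architecture, here is a compact, hypothetical sketch of the ghost logic described above, written as a plain switch statement. The GhostState enum and Ghost class are made up purely to illustrate states, decisions, and transitions; the actual implementation below will be far more modular.

public enum GhostState { Chase, Evade }

public class Ghost
{
    public GhostState State { get; private set; } = GhostState.Chase;

    // The "decision" is whether the player's power-up is currently active.
    public void Tick(bool powerUpActive)
    {
        switch (State)
        {
            case GhostState.Chase:
                if (powerUpActive) State = GhostState.Evade;   // transition: chase -> evade
                break;
            case GhostState.Evade:
                if (!powerUpActive) State = GhostState.Chase;  // transition: evade -> chase
                break;
        }
    }
}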
FSMs free us to build modular AI. For instance, with just a single new action, we can create an NPC with a new behavior. Thus, we can ascribe a new action—the eating of a power pellet—to one of our Pac-Man ghosts, giving it the ability to eat power pellets while evading the player. We can reuse existing actions, decisions, and transitions to support this behavior.
Since the resources required to develop a unique NPC are minimal, we are well positioned to meet the evolving project requirements of multiple unique NPCs. On the other hand, an excessive number of states and transitions can get us tangled up in a spaghetti-state machine—an FSM whose overabundance of connections makes it difficult to debug and maintain.
To demonstrate how to implement a finite-state machine in Unity, let's create a simple stealth game. Our architecture will incorporate ScriptableObjects, which are data containers that can store and share information throughout the application so that we do not need to reproduce it. ScriptableObjects are capable of limited processing, such as invoking actions and querying decisions. In addition to Unity's official documentation, the older Game Architecture with Scriptable Objects talk remains an excellent resource if you want to dive deeper.
Before we add AI to this initial ready-to-compile project, consider the proposed architecture:
Proposed FSM Architecture
In our sample game, the enemy (an NPC represented by a blue capsule) patrols. When the enemy sees the player (represented by a gray capsule), the enemy starts following the player:
Core Components of Our Sample Stealth Game FSM
In contrast with Pac-Man, the enemy in our game will not return to the default state (“patrol”) once it follows the player.
Let's begin by creating our classes. In a new scripts folder, we will add all of the proposed architectural building blocks as C# scripts.
BaseStateMachine Class
The BaseStateMachine class is the only MonoBehaviour that we will add to our AI-enabled NPCs. For simplicity's sake, our BaseStateMachine will be bare-bones. If we wanted to, however, we could add an inherited custom FSM that stores additional parameters and references to additional components. Note that the code will not compile properly until we have added our BaseState class, which we'll do later in our tutorial.
The code for BaseStateMachine refers to and executes the current state to perform its actions and to see if a transition is warranted:
using UnityEngine;
namespace Demo.FSM
{
public class BaseStateMachine : MonoBehaviour
{
[SerializeField] private BaseState _initialState;
private void Awake()
{
CurrentState = _initialState;
}
public BaseState CurrentState { get; set; }
private void Update()
{
CurrentState.Execute(this);
}
}
}
BaseState Class
Our state is of the type BaseState, which we derive from ScriptableObject. BaseState includes a single method, Execute, which takes BaseStateMachine as its argument and passes to it actions and transitions. This is how BaseState looks:
using UnityEngine;
namespace Demo.FSM
{
public class BaseState : ScriptableObject
{
public virtual void Execute(BaseStateMachine machine) { }
}
}
State and RemainInState Classes
We now derive two classes from BaseState. First, we have the State class, which stores references to actions and transitions in two lists (one for actions, the other for transitions) and overrides Execute to call it on each action and transition:
using System.Collections.Generic;
using UnityEngine;
namespace Demo.FSM
{
[CreateAssetMenu(menuName = "FSM/State")]
public sealed class State : BaseState
{
public List<FSMAction> Action = new List<FSMAction>();
public List<Transition> Transitions = new List<Transition>();
public override void Execute(BaseStateMachine machine)
{
foreach (var action in Action)
action.Execute(machine);
foreach(var transition in Transitions)
transition.Execute(machine);
}
}
}
Second, we have the RemainInState class, which tells the FSM when not to perform a transition:
using UnityEngine;
namespace Demo.FSM
{
[CreateAssetMenu(menuName = "FSM/Remain In State", fileName = "RemainInState")]
public sealed class RemainInState : BaseState
{
}
}
Note that these classes will not compile until we have added the FSMAction, Decision, and Transition classes.
FSMAction Class
In the Proposed FSM Architecture diagram, the base action class is labeled "Action." However, since Action is already in use by the .NET System namespace, we will create the base class under the name FSMAction.
FSMAction, a ScriptableObject, cannot process functions independently, so we will define it as an abstract class. As our development progresses, we may require a single action to serve more than one state. Fortunately, we can associate FSMAction with as many states from as many FSMs as we wish.
The FSMAction abstract class looks like this:
using UnityEngine;
namespace Demo.FSM
{
public abstract class FSMAction : ScriptableObject
{
public abstract void Execute(BaseStateMachine stateMachine);
}
}
Decision and Transition Classes
To finish up our FSM, we will define two more classes. First, we have Decision, an abstract class from which all concrete decisions derive and define their custom behavior:
using UnityEngine;
namespace Demo.FSM
{
public abstract class Decision : ScriptableObject
{
public abstract bool Decide(BaseStateMachine state);
}
}
The second class, Transition, contains a Decision object and two states: TrueState, the state to transition to when the Decision yields true, and FalseState, the state to transition to when the Decision yields false. It looks like this:
using UnityEngine;
namespace Demo.FSM
{
[CreateAssetMenu(menuName = "FSM/Transition")]
public sealed class Transition : ScriptableObject
{
public Decision Decision;
public BaseState TrueState;
public BaseState FalseState;
public void Execute(BaseStateMachine stateMachine)
{
if(Decision.Decide(stateMachine) && !(TrueState is RemainInState))
stateMachine.CurrentState = TrueState;
else if(!(FalseState is RemainInState))
stateMachine.CurrentState = FalseState;
}
}
}
Everything we have built up to this point should compile without any errors. If you experience issues, check your Unity Editor version, which can cause errors if out of date. Ensure that all files have been properly cloned from the original project folder and that all publicly accessed variables are not declared private.
Now, with the heavy lifting done, we are ready to implement custom actions and decisions in a new scripts folder.
Patrol and Chase Classes
When we analyze the Core Components of Our Sample Stealth Game FSM diagram, we see that our NPC can be in one of two states: patrol or chase.
We can reuse our existing transition implementation via Unity's GUI, as we'll discuss later. This leaves two actions (PatrolAction and ChaseAction) and a decision for us to code.
The patrol state action (which derives from the base FSMAction) overrides the Execute method to get two components: PatrolPoints, which tracks the patrol points, and NavMeshAgent, Unity's implementation for navigation in 3D space. The override then checks whether the AI agent has reached its destination and, if so, moves on to the next one. It looks like this:
using Demo.Enemy;
using Demo.FSM;
using UnityEngine;
using UnityEngine.AI;
namespace Demo.MyFSM
{
[CreateAssetMenu(menuName = "FSM/Actions/Patrol")]
public class PatrolAction : FSMAction
{
public override void Execute(BaseStateMachine stateMachine)
{
var navMeshAgent = stateMachine.GetComponent<NavMeshAgent>();
var patrolPoints = stateMachine.GetComponent<PatrolPoints>();
if (patrolPoints.HasReached(navMeshAgent))
navMeshAgent.SetDestination(patrolPoints.GetNext().position);
}
}
}
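The PatrolPoints component referenced here ships with the starter project and isn't listed in this article. For orientation only, here is a hedged sketch of what it might look like, inferred from how PatrolAction uses it (HasReached and GetNext); the field names and the arrival threshold are assumptions:

using UnityEngine;
using UnityEngine.AI;

namespace Demo.Enemy
{
    // Hypothetical sketch of the starter project's PatrolPoints component.
    public class PatrolPoints : MonoBehaviour
    {
        [SerializeField] private Transform[] _points;           // assumed: Point1, Point2, ... assigned in the Inspector
        [SerializeField] private float _arriveDistance = 0.5f;  // assumed arrival threshold
        private int _index;

        // True once the agent has no pending path and is close enough to its destination.
        public bool HasReached(NavMeshAgent agent)
        {
            return !agent.pathPending && agent.remainingDistance <= _arriveDistance;
        }

        // Returns the next patrol point, wrapping around to the first one.
        public Transform GetNext()
        {
            _index = (_index + 1) % _points.Length;
            return _points[_index];
        }
    }
}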
We may want to consider caching the PatrolPoints and NavMeshAgent components. Caching would allow us to share ScriptableObjects for actions among agents without the performance impact of running GetComponent on each query of the finite-state machine.
To be clear, we cannot cache component instances in the Execute method. So instead, we'll add a custom GetComponent method to BaseStateMachine. Our custom GetComponent caches the instance the first time it is called and returns the cached instance on subsequent calls. For reference, this is the implementation of BaseStateMachine with caching:
using System;
using System.Collections.Generic;
using UnityEngine;
namespace Demo.FSM
{
public class BaseStateMachine : MonoBehaviour
{
[SerializeField] private BaseState _initialState;
private Dictionary<Type, Component> _cachedComponents;
private void Awake()
{
CurrentState = _initialState;
_cachedComponents = new Dictionary<Type, Component>();
}
public BaseState CurrentState { get; set; }
private void Update()
{
CurrentState.Execute(this);
}
public new T GetComponent<T>() where T : Component
{
if(_cachedComponents.ContainsKey(typeof(T)))
return _cachedComponents[typeof(T)] as T;
var component = base.GetComponent<T>();
if(component != null)
{
_cachedComponents.Add(typeof(T), component);
}
return component;
}
}
}
Like its counterpart PatrolAction, the ChaseAction class overrides the Execute method to fetch components, in this case NavMeshAgent and EnemySightSensor. Instead of cycling through patrol points, however, ChaseAction sets the agent's destination to the player's position:
using Demo.Enemy;
using Demo.FSM;
using UnityEngine;
using UnityEngine.AI;
namespace Demo.MyFSM
{
[CreateAssetMenu(menuName = "FSM/Actions/Chase")]
public class ChaseAction : FSMAction
{
public override void Execute(BaseStateMachine stateMachine)
{
var navMeshAgent = stateMachine.GetComponent<NavMeshAgent>();
var enemySightSensor = stateMachine.GetComponent<EnemySightSensor>();
navMeshAgent.SetDestination(enemySightSensor.Player.position);
}
}
}
InLineOfSightDecision Class
The final piece is the InLineOfSightDecision class, which inherits the base Decision and gets the EnemySightSensor component to check whether the player is in the NPC's line of sight:
using Demo.Enemy;
using Demo.FSM;
using UnityEngine;
namespace Demo.MyFSM
{
[CreateAssetMenu(menuName = "FSM/Decisions/In Line Of Sight")]
public class InLineOfSightDecision : Decision
{
public override bool Decide(BaseStateMachine stateMachine)
{
var enemyInLineOfSight = stateMachine.GetComponent<EnemySightSensor>();
return enemyInLineOfSight.Ping();
}
}
}
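Like PatrolPoints, the EnemySightSensor component comes from the starter project and isn't shown here. As a rough idea only, here is a hedged sketch inferred from the Player property and Ping() method used above; the view distance, obstacle mask, and raycast approach are assumptions:

using UnityEngine;

namespace Demo.Enemy
{
    // Hypothetical sketch of the starter project's EnemySightSensor component.
    public class EnemySightSensor : MonoBehaviour
    {
        [SerializeField] private Transform _player;            // assumed: assigned in the Inspector
        [SerializeField] private float _viewDistance = 10f;    // assumed sight range
        [SerializeField] private LayerMask _obstacleMask;      // assumed: geometry that blocks sight

        public Transform Player => _player;

        // True when the player is in range and nothing blocks the line of sight.
        public bool Ping()
        {
            Vector3 toPlayer = _player.position - transform.position;
            if (toPlayer.magnitude > _viewDistance)
                return false;
            return !Physics.Raycast(transform.position, toPlayer.normalized, toPlayer.magnitude, _obstacleMask);
        }
    }
}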
We are finally ready to attach behaviors to the Enemy agent. The assets we need are created in the Unity Editor's Project window, via the Create > FSM menu entries registered by our CreateAssetMenu attributes.
Patrol and Chase States
Let's create two states (Create > FSM > State) and name them "Patrol" and "Chase". While here, let's also create a RemainInState object (Create > FSM > Remain In State).
Now, it’s time to create the actions we just coded:
Use Create > FSM > Actions > Patrol and Create > FSM > Actions > Chase for the two actions, and Create > FSM > Decisions > In Line Of Sight for the Decision.
To enable a transition from PatrolState to ChaseState, let's first create the transition scriptable object (Create > FSM > Transition):
We’ll populate the resulting inspector window as follows:
Filling Out the Spotted Enemy (Transition) Inspector Window
Then we’ll complete the Chase State inspector dialog as follows:
Filling Out the Chase State Inspector Window
Next, we’ll complete the Patrol State dialog:
Filling Out the Patrol State Inspector Window
Finally, we'll add the BaseStateMachine component to the enemy object: In the Unity Editor's Project window, open the SampleScene asset, select the Enemy object from the Hierarchy panel, and, in the Inspector window, select Add Component > Base State Machine:
Adding the Base State Machine (Script) Component
For any issues, double-check that your game objects are configured correctly. For example, confirm that the Enemy object includes the PatrolPoints script component and that the patrol point objects Point1, Point2, etc. are assigned; such references can go missing when editor versions don't match.
Now you are ready to play the sample game and observe that the enemy will follow the player when the player steps into the enemy’s line of sight.
In this finite-state machine tutorial, we created a highly modular FSM-based AI (and corresponding GitHub repo) that we can reuse in future projects. Thanks to this modularity, we can always add power to our AI by introducing new components.
But our architecture also paves the way for graphical-first FSM design, which would elevate our developer experience to a new level of professionalism. We could then create FSMs for our games more rapidly—and with better creative accuracy.
Original article source at: https://www.toptal.com/
1670080860
In “Unity AI Development: A Finite-state Machine Tutorial,” we created a simple stealth game—a modular FSM-based AI. In the game, an enemy agent patrols the gamespace. When it spots the player, the enemy changes its state and follows the player instead of patrolling.
In this second leg of our Unity journey, we will build a graphical user interface (GUI) to create the core components of our finite-state machine (FSM) more rapidly, and with an improved developer experience.
The FSM detailed in the previous tutorial was built of architectural blocks written as C# scripts. We added custom ScriptableObject actions and decisions as classes. Our ScriptableObject approach gave us an easily maintainable and customizable FSM. In this tutorial, we replace our FSM's drag-and-drop ScriptableObjects with a graphical option.
In your game, if you’d like for the player to win more easily, replace the player detection script with this updated script that narrows the enemy’s field of vision.
We’ll build our graphical editor using xNode, a framework for node-based behavior trees that will display our FSM’s flow visually. Although Unity’s GraphView can accomplish the job, its API is both experimental and meagerly documented. xNode’s user interface delivers a superior developer experience, facilitating the prototyping and rapid expansion of our FSM.
Let's add xNode to our project as a Git dependency using the Unity Package Manager: open Window > Package Manager, click the + button, choose Add package from git URL, enter https://github.com/siccity/xNode.git in the text box, and click the Add button.
Now we're ready to dive deep and understand the key components of xNode:
Node class | Represents a node, a graph's most fundamental unit. In this xNode tutorial, we derive from the Node class new classes that declare nodes equipped with custom functionality and roles. |
NodeGraph class | Represents a collection of nodes (Node class instances) and the edges that connect them. In this xNode tutorial, we derive from NodeGraph a new class that manipulates and evaluates the nodes. |
NodePort class | Represents a communication gate, a port of type input or type output, located between Node instances in a NodeGraph . The NodePort class is unique to xNode. |
[Input] attribute | The addition of the [Input] attribute to a port designates it as an input, enabling the port to pass values to the node it is part of. Think of the [Input] attribute as a function parameter. |
[Output] attribute | The addition of the [Output] attribute to a port designates it as an output, enabling the port to pass values from the node it is part of. Think of the [Output] attribute as the return value of a function. |
In xNode, we work with graphs in which each State and Transition takes the form of a node. Input and/or output connections enable a node to relate to any or all other nodes in our graph.
Let's imagine a node with three input values: two arbitrary and one boolean. The node will output one of the two arbitrary-type input values, depending on whether the boolean input is true or false.
An Example Branch Node
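To make that concrete, here is a hedged sketch of what such a branch node could look like in xNode. It is not part of the FSM we're building; the BranchNode name, float inputs, and field names are made up for illustration, and it relies on xNode's GetValue/GetInputValue mechanism:

using XNode;

// Hypothetical branch node: outputs either "a" or "b" depending on "condition".
public class BranchNode : Node
{
    [Input] public float a;
    [Input] public float b;
    [Input] public bool condition;
    [Output] public float result;

    // xNode calls GetValue to resolve what an output port should produce.
    public override object GetValue(NodePort port)
    {
        if (port.fieldName != "result")
            return null;
        bool useA = GetInputValue("condition", condition);
        return useA ? GetInputValue("a", a) : GetInputValue("b", b);
    }
}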
To convert our existing FSM to a graph, we modify the State and Transition classes to inherit from the Node class instead of the ScriptableObject class. We create a graph object of type NodeGraph to contain all of our State and Transition objects.
BaseStateMachine to Use As a Base Type
We'll begin building our graphical interface by adding two new virtual methods to our existing BaseStateMachine class:
Init | Assigns the initial state to the CurrentState property |
Execute | Executes the current state |
Declaring these methods as virtual allows us to override them, so we can define custom initialization and execution behaviors for classes inheriting the BaseStateMachine class:
using System;
using System.Collections.Generic;
using UnityEngine;
namespace Demo.FSM
{
public class BaseStateMachine : MonoBehaviour
{
[SerializeField] private BaseState _initialState;
private Dictionary<Type, Component> _cachedComponents;
private void Awake()
{
Init();
_cachedComponents = new Dictionary<Type, Component>();
}
public BaseState CurrentState { get; set; }
private void Update()
{
Execute();
}
public virtual void Init()
{
CurrentState = _initialState;
}
public virtual void Execute()
{
CurrentState.Execute(this);
}
// Allows us to execute consecutive calls of GetComponent in O(1) time
public new T GetComponent<T>() where T : Component
{
if(_cachedComponents.ContainsKey(typeof(T)))
return _cachedComponents[typeof(T)] as T;
var component = base.GetComponent<T>();
if(component != null)
{
_cachedComponents.Add(typeof(T), component);
}
return component;
}
}
}
Next, under our FSM folder, let's create:
FSMGraph | A folder |
BaseStateMachineGraph | A C# class within FSMGraph |
For the time being, BaseStateMachineGraph will inherit just the BaseStateMachine class:
using UnityEngine;
namespace Demo.FSM.Graph
{
public class BaseStateMachineGraph : BaseStateMachine
{
}
}
We can't add functionality to BaseStateMachineGraph until we create our base node type; let's do that next.
NodeGraph and Creating a Base Node Type
Under our newly created FSMGraph folder, we'll create:
FSMGraph | A class |
For now, FSMGraph will inherit just the NodeGraph class (with no added functionality):
using UnityEngine;
using XNode;
namespace Demo.FSM.Graph
{
[CreateAssetMenu(menuName = "FSM/FSM Graph")]
public class FSMGraph : NodeGraph
{
}
}
Before we create classes for our nodes, let’s add:
FSMNodeBase | A class to be used as a base class by all of our nodes |
The FSMNodeBase class will contain an input named Entry of type FSMNodeBase to enable us to connect nodes to one another.
We will also add two helper functions:
GetFirst | Retrieves the first node connected to the requested output |
GetAllOnPort | Retrieves all remaining nodes that connect to the requested output |
using System.Collections.Generic;
using XNode;
namespace Demo.FSM.Graph
{
public abstract class FSMNodeBase : Node
{
[Input(backingValue = ShowBackingValue.Never)] public FSMNodeBase Entry;
protected IEnumerable<T> GetAllOnPort<T>(string fieldName) where T : FSMNodeBase
{
NodePort port = GetOutputPort(fieldName);
for (var portIndex = 0; portIndex < port.ConnectionCount; portIndex++)
{
yield return port.GetConnection(portIndex).node as T;
}
}
protected T GetFirst<T>(string fieldName) where T : FSMNodeBase
{
NodePort port = GetOutputPort(fieldName);
if (port.ConnectionCount > 0)
return port.GetConnection(0).node as T;
return null;
}
}
}
Ultimately, we’ll have two types of state nodes; let’s add a class to support these:
BaseStateNode | A base class to support both StateNode and RemainInStateNode |
namespace Demo.FSM.Graph
{
public abstract class BaseStateNode : FSMNodeBase
{
}
}
Next, modify the BaseStateMachineGraph class:
using UnityEngine;
namespace Demo.FSM.Graph
{
public class BaseStateMachineGraph : BaseStateMachine
{
public new BaseStateNode CurrentState { get; set; }
}
}
Here, we've hidden the CurrentState property inherited from the base class and changed its type from BaseState to BaseStateNode.
Now, to form our FSM's main building blocks, let's add three new classes to our FSMGraph folder:
StateNode | Represents the state of an agent. On execute, StateNode iterates over the TransitionNode s connected to the output port of the StateNode (retrieved by a helper method). StateNode queries each one whether to transition the node to a different state or leave the node's state as is. |
RemainInStateNode | Indicates a node should remain in the current state. |
TransitionNode | Makes the decision to transition to a different state or stay in the same state. |
In the previous Unity FSM tutorial, the State class iterates over its transitions list. Here in xNode, StateNode serves as State's equivalent, iterating over the nodes retrieved via our GetAllOnPort helper method.
Now add an [Output] attribute to the outgoing connections (the transition nodes) to indicate that they should be part of the GUI. By xNode's design, the attribute's value originates in the source node: the node containing the field marked with the [Output] attribute. Because we are using [Output] and [Input] attributes to describe relationships and connections that will be set by the xNode GUI, we can't treat these values as we normally would. Consider how we iterate through Actions versus Transitions:
using System.Collections.Generic;
namespace Demo.FSM.Graph
{
[CreateNodeMenu("State")]
public sealed class StateNode : BaseStateNode
{
public List<FSMAction> Actions;
[Output] public List<TransitionNode> Transitions;
public void Execute(BaseStateMachineGraph baseStateMachine)
{
foreach (var action in Actions)
action.Execute(baseStateMachine);
foreach (var transition in GetAllOnPort<TransitionNode>(nameof(Transitions)))
transition.Execute(baseStateMachine);
}
}
}
In this case, the Transitions output can have multiple nodes attached to it, so we have to call the GetAllOnPort helper method to obtain a list of the [Output] connections.
RemainInStateNode is, by far, our simplest class. Executing no logic, RemainInStateNode merely indicates to our agent (in our game's case, the enemy) to remain in its current state:
namespace Demo.FSM.Graph
{
[CreateNodeMenu("Remain In State")]
public sealed class RemainInStateNode : BaseStateNode
{
}
}
At this point, the TransitionNode class is still incomplete and will not compile. The associated errors will clear once we update the class.
To build TransitionNode, we need to get around xNode's requirement that the value of an output originate in the source node, as we did when we built StateNode. A major difference between StateNode and TransitionNode is that TransitionNode's output may attach to only one node. In our case, GetFirst will fetch the one node attached to each of our ports (one state node to transition to in the true case and another to transition to in the false case):
namespace Demo.FSM.Graph
{
[CreateNodeMenu("Transition")]
public sealed class TransitionNode : FSMNodeBase
{
public Decision Decision;
[Output] public BaseStateNode TrueState;
[Output] public BaseStateNode FalseState;
public void Execute(BaseStateMachineGraph stateMachine)
{
var trueState = GetFirst<BaseStateNode>(nameof(TrueState));
var falseState = GetFirst<BaseStateNode>(nameof(FalseState));
var decision = Decision.Decide(stateMachine);
if (decision && !(trueState is RemainInStateNode))
{
stateMachine.CurrentState = trueState;
}
else if(!decision && !(falseState is RemainInStateNode))
stateMachine.CurrentState = falseState;
}
}
}
Let’s have a look at the graphical results from our code.
Now, with all the FSM classes sorted out, we can proceed to create our FSM graph for the game's enemy agent. In the Unity project window, right-click the EnemyAI folder and choose Create > FSM > FSM Graph. To make our graph easier to identify, let's rename it EnemyGraph.
In the xNode Graph editor window, right-click to reveal a drop-down menu listing State, Transition, and RemainInState. If the window is not visible, double-click the EnemyGraph file to launch it.
To create the Chase and Patrol states:
Right-click and choose State to create a new node, and name it Chase.
Return to the drop-down menu and choose State again to create a second node; name it Patrol.
Drag and drop the existing Chase and Patrol actions onto their newly created corresponding states.
To create the transition:
Right-click and choose Transition to create a new node.
Assign the LineOfSightDecision object to the transition's Decision field.
To create the RemainInState node, right-click and choose Remain In State.
To connect the graph:
Connect the Patrol node's Transitions output to the Transition node's Entry input.
Connect the Transition node's True State output to the Chase node's Entry input.
Connect the Transition node's False State output to the Remain In State node's Entry input.
The graph should look like this:
The Initial Look at Our FSM Graph
Nothing in the graph indicates which node (the Patrol or Chase state) is our initial node. The BaseStateMachineGraph class detects four nodes but, with no indicator present, cannot choose the initial state.
To resolve this issue, let’s create:
FSMInitialNode | A class whose single output of type StateNode is named InitialNode |
Our output InitialNode denotes the initial state. Next, in FSMInitialNode, create:
NextNode | A property to enable us to fetch the node connected to the InitialNode output |
using XNode;
namespace Demo.FSM.Graph
{
[CreateNodeMenu("Initial Node"), NodeTint("#00ff52")]
public class FSMInitialNode : Node
{
[Output] public StateNode InitialNode;
public StateNode NextNode
{
get
{
var port = GetOutputPort("InitialNode");
if (port == null || port.ConnectionCount == 0)
return null;
return port.GetConnection(0).node as StateNode;
}
}
}
}
Now that we have created the FSMInitialNode class, we can connect it to the Entry input of the initial state and return the initial state via the NextNode property.
Let's go back to our graph and add the initial node. In the xNode editor window, right-click and choose Initial Node to create it, then connect its output to the Patrol node's Entry input.
The graph should now look like this:
Our FSM Graph With the Initial Node Attached to the Patrol State
To make our lives easier, we'll add to FSMGraph:
InitialState | A property |
The first time we try to retrieve the InitialState property's value, the property getter will traverse all nodes in our graph looking for FSMInitialNode. Once FSMInitialNode is located, we use the NextNode property to find our initial state node:
using System.Linq;
using UnityEngine;
using XNode;
namespace Demo.FSM.Graph
{
[CreateAssetMenu(menuName = "FSM/FSM Graph")]
public sealed class FSMGraph : NodeGraph
{
private StateNode _initialState;
public StateNode InitialState
{
get
{
if (_initialState == null)
_initialState = FindInitialStateNode();
return _initialState;
}
}
private StateNode FindInitialStateNode()
{
var initialNode = nodes.FirstOrDefault(x => x is FSMInitialNode);
if (initialNode != null)
{
return (initialNode as FSMInitialNode).NextNode;
}
return null;
}
}
}
Now, in our BaseStateMachineGraph, let's reference FSMGraph and override our BaseStateMachine's Init and Execute methods. Overriding Init sets CurrentState to the graph's initial state, and overriding Execute calls Execute on CurrentState:
using UnityEngine;
namespace Demo.FSM.Graph
{
public class BaseStateMachineGraph : BaseStateMachine
{
[SerializeField] private FSMGraph _graph;
public new BaseStateNode CurrentState { get; set; }
public override void Init()
{
CurrentState = _graph.InitialState;
}
public override void Execute()
{
((StateNode)CurrentState).Execute(this);
}
}
}
Now, let’s apply our graph to our Enemy object, and see it in action.
In preparation for testing, in the Unity Editor’s Project window, we need to:
Open the SampleScene asset.
Locate our Enemy game object in the Unity hierarchy window.
Replace the BaseStateMachine component with the BaseStateMachineGraph component:
Click Add Component and select the correct BaseStateMachineGraph script.
Assign our FSM graph, EnemyGraph, to the Graph field of the BaseStateMachineGraph component.
Delete the BaseStateMachine component (as it is no longer needed) by right-clicking it and selecting Remove Component.
Now the Enemy game object should look like this:
The Enemy Game Object
That's it! Now we have a modular FSM with a graphic editor. When we click the Play button, we see that our graphically created enemy AI works exactly like our previously created ScriptableObject enemy.
The advantages of using a graphical editor are self-evident, but I’ll leave you with a word of caution: As you develop more sophisticated AI for your game, the number of states and transitions grows, and the FSM becomes confusing and difficult to read. The graphical editor grows to resemble a web of lines that originate in multiple states and terminate at multiple transitions—and vice versa, making our FSM difficult to debug.
As we did in the previous tutorial, we invite you to make the code your own, and we leave the door open for you to optimize your stealth game and address these concerns. Imagine how helpful it would be to color-code your state nodes to indicate whether a node is active or inactive, or to resize the RemainInState and Initial nodes to limit their screen real estate.
Such enhancements are not merely cosmetic. Color and size references would help us identify where and when to debug. A graph that is easy on the eye is also simpler to assess, analyze, and comprehend. Any next steps are up to you—with the foundation of our graphical editor in place, there’s no limit to the developer experience improvements you can make.
The editorial team of the Toptal Engineering Blog extends its gratitude to Goran Lalić and Maddie Douglas for reviewing the code samples and other technical content presented in this article.
Original article source at: https://www.toptal.com/
1669890624
Build Rest Api Project With Express & MongoDB | Express Router | Node.js Tutorial for Beginners #8
In this video we will start building a contact management Rest API project using Express & MongoDb. And we will start with the project intro, express fundamentals and routing in detail.
⭐️ Support my channel⭐️ https://www.buymeacoffee.com/dipeshmalvia
⭐️ GitHub link for Reference ⭐️ https://github.com/dmalvia/Express_MongoDB_Rest_API_Tutorial
⭐️ Node.js for beginners Playlist ⭐️ https://youtube.com/playlist?list=PLTP3E5bPW796_icZanMqhdg7i0Cl7Y51F
🔥 Video contents... ENJOY 👇
⭐️ JavaScript ⭐️
🔗 Social Medias 🔗
⭐️ Tags ⭐️ - Node.js, Express & MongoDB Project - Build Rest API Project Express & MongoDB - Express Routing Tutorial - Node.Js & Express Crash Course
⭐️ Hashtags ⭐️ #nodejs #express #beginners #tutorial
Disclaimer: It doesn't feel good to have a disclaimer in every video but this is how the world is right now. All videos are for educational purpose and use them wisely. Any video may have a slight mistake, please take decisions based on your research. This video is not forcing anything on you.
1669796397
🕐 TIMESTAMPS:
00:00 Introduction
00:15 Skillshare Sponsorship
01:58 Reduce Function Lesson Intro
03:48 Reduce Function Example #1
05:30 How to write a Reduce Function (2 Methods)
11:57 Reduce Function Example #2
16:20 Reduce Function Lesson Summary
17:42 Outro
Use it when: You have an array of amounts and you want to add them all up.
const euros = [29.76, 41.85, 46.5];
const sum = euros.reduce((total, amount) => total + amount);
sum // 118.11
How to use it:
If you have never used ES6 syntax before, don’t let the example above intimidate you. It’s exactly the same as writing:
var euros = [29.76, 41.85, 46.5];
var sum = euros.reduce( function(total, amount){
return total + amount
});
sum // 118.11
We use const instead of var, we replace the word function with a "fat arrow" (=>) after the parameters, and we omit the word return.
I’ll use ES6 syntax for the rest of the examples, since it’s more concise and leaves less room for errors.
Instead of logging the sum, you could divide the sum by the length of the array before you return a final value.
The way to do this is by taking advantage of the other arguments in the reduce method. The first of those arguments is the index. Much like a for-loop, the index refers to the number of times the reducer has looped over the array. The last argument is the array itself.
const euros = [29.76, 41.85, 46.5];
const average = euros.reduce((total, amount, index, array) => {
total += amount;
if( index === array.length-1) {
return total/array.length;
}else {
return total;
}
});
average // 39.37
If you can use the reduce function to spit out an average then you can use it any way you want.
For example, you could double the total, or halve each number before adding them together, or use an if statement inside the reducer to only add numbers that are greater than 10. My point is that the reduce method in JavaScript gives you a mini CodePen where you can write whatever logic you want. It will repeat the logic for each amount in the array and then return a single value.
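For instance, a reducer that only adds amounts greater than 10 might look like this (the numbers are made-up sample data):

const amounts = [5, 20, 8, 45];

const bigSum = amounts.reduce((total, amount) => {
  if (amount > 10) {
    return total + amount;
  }
  return total; // skip small amounts, keep the running total unchanged
}, 0);

bigSum // 65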
The thing is, you don’t always have to return a single value. You can reduce an array into a new array.
For instance, lets reduce an array of amounts into another array where every amount is doubled. To do this we need to set the initial value for our accumulator to an empty array.
The initial value is the value of the total parameter when the reduction starts. You set the initial value by adding a comma followed by your initial value inside the parentheses but after the curly braces (the 0 at the end of the example below).
const average = euros.reduce((total, amount, index, array) => {
total += amount
return total/array.length
}, 0);
In previous examples, the initial value was zero so I omitted it. By omitting the initial value, the total will default to the first amount in the array.
By setting the initial value to an empty array we can then push each amount into the total. If we want to reduce an array of values into another array where every value is doubled, we need to push the amount * 2. Then we return the total when there are no more amounts to push.
const euros = [29.76, 41.85, 46.5];
const doubled = euros.reduce((total, amount) => {
total.push(amount * 2);
return total;
}, []);
doubled // [59.52, 83.7, 93]
We’ve created a new array where every amount is doubled. We could also filter out numbers we don’t want to double by adding an if statement inside our reducer.
const euro = [29.76, 41.85, 46.5];
const above30 = euro.reduce((total, amount) => {
if (amount > 30) {
total.push(amount);
}
return total;
}, []);
above30 // [ 41.85, 46.5 ]
These operations are the map and filter methods rewritten as a reduce method.
For these examples, it would make more sense to use map or filter because they are simpler to use. The benefit of using reduce comes into play when you want to map and filter together and you have a lot of data to go over.
If you chain map and filter together you are doing the work twice. You filter every single value and then you map the remaining values. With reduce you can filter and then map in a single pass.
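Here's that idea in code: the chained version walks the data twice, while the reduce version filters and maps in a single pass (keeping only amounts over 30 and doubling them):

const euros = [29.76, 41.85, 46.5];

// Two passes: filter, then map.
const doubledOver30 = euros.filter(amount => amount > 30).map(amount => amount * 2);

// One pass with reduce.
const doubledOver30InOnePass = euros.reduce((total, amount) => {
  if (amount > 30) {
    total.push(amount * 2);
  }
  return total;
}, []);

doubledOver30InOnePass // [ 83.7, 93 ]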
Use map and filter but when you start chaining lots of methods together you now know that it is faster to reduce the data instead.
Use it when: You have a collection of items and you want to know how many of each item are in the collection.
const fruitBasket = ['banana', 'cherry', 'orange', 'apple', 'cherry', 'orange', 'apple', 'banana', 'cherry', 'orange', 'fig' ];
const count = fruitBasket.reduce( (tally, fruit) => {
tally[fruit] = (tally[fruit] || 0) + 1 ;
return tally;
} , {})
count // { banana: 2, cherry: 3, orange: 3, apple: 2, fig: 1 }
To tally items in an array our initial value must be an empty object, not an empty array like it was in the last example.
Since we are going to be returning an object we can now store key-value pairs in the total.
fruitBasket.reduce( (tally, fruit) => {
tally[fruit] = 1;
return tally;
}, {})
On our first pass, we want the name of the first key to be our current value and we want to give it a value of 1.
This gives us an object with all the fruit as keys, each with a value of 1. We want the amount of each fruit to increase if they repeat.
To do this, on each subsequent loop we check whether our total contains a key for the current fruit of the reducer. If it doesn't, we create it. If it does, we increment the amount by one.
fruitBasket.reduce((tally, fruit) => {
if (!tally[fruit]) {
tally[fruit] = 1;
} else {
tally[fruit] = tally[fruit] + 1;
}
return tally;
}, {});
I rewrote the exact same logic in a more concise way up top.
We can use reduce to flatten nested amounts into a single array.
We set the initial value to an empty array and then concatenate the current value to the total.
const data = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
const flat = data.reduce((total, amount) => {
return total.concat(amount);
}, []);
flat // [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
More often than not, information is nested in more complicated ways. For instance, lets say we just want all the colors in the data variable below.
const data = [
{a: 'happy', b: 'robin', c: ['blue','green']},
{a: 'tired', b: 'panther', c: ['green','black','orange','blue']},
{a: 'sad', b: 'goldfish', c: ['green','red']}
];
We're going to step through each object and pull out the colors. We do this by pointing to amount.c for each object in the array. We then use a forEach loop to push every value in the nested array into our total.
const colors = data.reduce((total, amount) => {
amount.c.forEach( color => {
total.push(color);
})
return total;
}, [])
colors //['blue','green','green','black','orange','blue','green','red']
If we only want unique colors, we can check whether the color already exists in total before we push it.
const uniqueColors = data.reduce((total, amount) => {
amount.c.forEach( color => {
if (total.indexOf(color) === -1){
total.push(color);
}
});
return total;
}, []);
uniqueColors // [ 'blue', 'green', 'black', 'orange', 'red' ]
An interesting aspect of the reduce method in JavaScript is that you can reduce over functions as well as numbers and strings.
Let’s say we have a collection of simple mathematical functions. these functions allow us to increment, decrement, double and halve an amount.
function increment(input) { return input + 1;}
function decrement(input) { return input - 1; }
function double(input) { return input * 2; }
function halve(input) { return input / 2; }
For whatever reason, we need to increment, then double, then decrement an amount.
You could write a function that takes an input, and returns (input + 1) * 2 -1. The problem is that we know we are going to need to increment the amount three times, then double it, then decrement it, and then halve it at some point in the future. We don’t want to have to rewrite our function every time so we going to use reduce to create a pipeline.
A pipeline is a term used for a list of functions that transform some initial value into a final value. Our pipeline will consist of our three functions in the order that we want to use them.
let pipeline = [increment, double, decrement];
Instead of reducing an array of values we reduce over our pipeline of functions. This works because we set the initial value as the amount we want to transform.
const result = pipeline.reduce(function(total, func) {
return func(total);
}, 1);
result // 3
Because the pipeline is an array, it can be easily modified. If we want to increment something three times, then double it, decrement it, and halve it, we just alter the pipeline.
var pipeline = [
increment,
increment,
increment,
double,
decrement,
halve
];
The reduce function stays exactly the same.
If you don’t pass in an initial value, reduce will assume the first item in your array is your initial value. This worked fine in the first few examples because we were adding up a list of numbers.
If you’re trying to tally up fruit, and you leave out the initial value then things get weird. Not entering an initial value is an easy mistake to make and one of the first things you should check when debugging.
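To see why, try the fruit tally without the empty object: the accumulator starts as the first array element, the string 'banana', so the property assignments silently do nothing (or throw a TypeError in strict mode) and the reducer just keeps returning that string:

const fruitBasket = ['banana', 'cherry', 'orange'];

const brokenCount = fruitBasket.reduce((tally, fruit) => {
  tally[fruit] = (tally[fruit] || 0) + 1; // tally is the string 'banana', not an object
  return tally;
});

brokenCount // 'banana'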
Another common mistake is to forget to return the total. You must return something for the reduce function to work. Always double check and make sure that you’re actually returning the value you want.
Tools, Tips & References
The first parameter of the reducer, which I have been calling total, is more commonly called the accumulator. It is important to know this because most people will refer to it as an accumulator if you read about it online. Some people call it prev, as in previous value. It all refers to the same thing. I found it easier to think of a total when I was learning reduce.
Thanks for reading!
#javascript #tutorial #beginner #reduce
1669357140
React Redux Tutorial – Efficient Management of States in React
React Redux Tutorial
React is one of the most popular JavaScript libraries which is used for front-end development. It has made our application development easier and faster by providing a component-based approach.
As you might know, it’s not the complete framework but just the view part of the MVC (Model-View-Controller) framework. So, how do you keep track of the data and handle the events in the applications developed using React? Well, this is where Redux comes as a savior and handles the data flow of the application from the backend.
Through this blog on React Redux tutorial, I will explain everything you need to know on how to integrate Redux with React applications. Below are the topics I will be discussing under React Redux tutorial:
Components Of Redux
React with Redux
As I have already mentioned that React follows the component-based approach, where the data flows through the components. In fact, the data in React always flows from parent to child components which makes it unidirectional. This surely keeps our data organized and helps us in controlling the application better. Because of this, the application’s state is contained in specific stores and as a result, the rest of the components remain loosely coupled. This makes our application more flexible leading to increased efficiency. That’s why communication from a parent component to a child component is convenient.
But what happens when we try to communicate from a non-parent component?
A child component can never pass data back up to the parent component. React does not provide any way for direct component-to-component communication. Even though React has features to support this approach, it is considered to be a poor practice. It is prone to errors and leads to spaghetti code. So, how can two non-parent components pass data to each other?
This is where React fails to provide a solution and Redux comes into the picture.
Redux provides a “store” as a solution to this problem. A store is a place where you can store all your application state together. Now the components can “dispatch” state changes to the store and not directly to the other components. Then the components that need the updates about the state changes can “subscribe” to the store.
Thus, with Redux, it becomes clear where the components get their state from as well as where should they send their states to. Now the component initiating the change does not have to worry about the list of components needing the state change and can simply dispatch the change to the store. This is how Redux makes the data flow easier.
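As a rough sketch of that store/dispatch/subscribe flow, here is a minimal, self-contained example; the counter reducer and the INCREMENT action are made up purely for illustration:

import { createStore } from 'redux';

// A tiny reducer: the whole application state is just a number.
const counter = (state = 0, action) => {
  switch (action.type) {
    case 'INCREMENT':
      return state + 1;
    default:
      return state;
  }
};

const store = createStore(counter);

// Any interested component subscribes to the store...
store.subscribe(() => console.log('New state:', store.getState()));

// ...while another component dispatches changes without knowing who is listening.
store.dispatch({ type: 'INCREMENT' }); // logs: New state: 1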
Just like React, Redux is also a library which is used widely for front-end development. It is basically a tool for managing both data-state and UI-state in JavaScript applications. Redux separates the application data and business logic into its own container in order to let React manage just the view. Rather than a traditional library or a framework, it’s an application data-flow architecture. It is most compatible with Single Page Applications (SPAs) where the management of the states over time can get complex. Check out this Full Stack developer course today to learn about React redux.
Redux was created by Dan Abramov and Andrew Clark around June 2015. It was inspired by Facebook’s Flux and influenced by functional programming language Elm. Redux got popular very quickly because of its simplicity, small size (only 2 KB) and great documentation.
Principles Of Redux
Redux follows three fundamental principles:
Single source of truth: The state of the entire application is stored in an object/ state tree within a single store. The single state tree makes it easier to keep track of the changes over time and debug or inspect the application. For a faster development cycle, it helps to persist the application’s state in development.
State is read-only: The only way to change the state is to trigger an action. An action is a plain JS object describing the change. Just like the state is the minimal representation of data, the action is the minimal representation of the change to that data. An action must have a type property (conventionally a String constant). All the changes are centralized and occur one by one in a strict order.
Changes are made with pure functions: In order to specify how the state tree is transformed by actions, you need pure functions. Pure functions are those whose return values depend solely on the values of their arguments. Reducers are just pure functions that take the previous state and an action and return the next state. You can have a single reducer in your application and as it grows, you can split it off into smaller reducers. These smaller reducers will then manage specific parts of the state tree.
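To make the third principle concrete, here is a small, hedged sketch of pure reducers and of how a growing reducer can be split with combineReducers; the reducer and action names are made up for illustration:

import { combineReducers } from 'redux';

// Pure functions: the next state depends only on the previous state and the action.
const visibilityFilter = (state = 'SHOW_ALL', action) =>
  action.type === 'SET_VISIBILITY_FILTER' ? action.filter : state;

const todos = (state = [], action) =>
  action.type === 'ADD_TODO'
    ? [...state, { text: action.text, completed: false }]
    : state;

// Each smaller reducer manages its own part of the state tree.
const rootReducer = combineReducers({ visibilityFilter, todos });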
Following are some of the major advantages of Redux:
Redux has four components.
Let us discuss them in detail:
Action – The only way to change state content is by emitting an action. Actions are the plain JavaScript objects which are the main source of information used to send data (user interactions, internal events such as API calls, and form submissions) from the application to the store. The store receives information only from the actions. You have to send the actions to the store using store.dispatch().
Internal actions are simple JavaScript objects that have a type property (usually String constant), describing the type of action and the entire information being sent to the store.
{
type: ADD_TODO,
text
}
Actions are created using action creators which are the normal functions that return actions.
function addTodo(text) {
return {
type: ADD_TODO,
text
}
}
To call actions anywhere in the app, use the dispatch() method:
dispatch(addTodo(text));
Reducer – Actions describe the fact that something happened, but don’t specify how the application’s state changes in response. This is the job of reducers. It is based on the array reduce method, where it accepts a callback (reducer) and lets you get a single value out of multiple values, sums of integers, or an accumulation of streams of values. In Redux, reducers are functions (pure) that take the current state of the application and an action and then return a new state. Understanding how reducers work is important because they perform most of the work.
function reducer(state = initialState, action) {
  switch (action.type) {
    case ADD_TODO:
      return Object.assign({}, state, {
        todos: [
          ...state.todos,
          {
            text: action.text,
            completed: false
          }
        ]
      })
    default:
      return state
  }
}
Store – A store is a JavaScript object which can hold the application’s state and provide a few helper methods to access the state, dispatch actions and register listeners. The entire state/ object tree of an application is saved in a single store. As a result of this, Redux is very simple and predictable. We can pass middleware to the store to handle the processing of data as well as to keep a log of various actions that change the state of stores. All the actions return a new state via reducers.
import { createStore } from 'redux'
import todoApp from './reducers'
let store = createStore(todoApp);
Following is a diagram which shows how the data actually flows through all the above-described components in Redux.
Now that you are familiar with Redux and its components, let’s now see how you can integrate it with a React application.
STEP 1: You need to setup the basic react, webpack, babel setup. Following are the dependencies we are using in this application.
"dependencies": {
"babel-core": "^6.10.4",
"babel-loader": "^6.2.4",
"babel-polyfill": "^6.9.1",
"babel-preset-es2015": "^6.9.0",
"babel-preset-react": "^6.11.1",
"babel-register": "^6.9.0",
"cross-env": "^1.0.8",
"css-loader": "^0.23.1",
"expect": "^1.20.1",
"node-libs-browser": "^1.0.0",
"node-sass": "^3.8.0",
"react": "^15.1.0",
"react-addons-test-utils": "^15.1.0",
"react-dom": "^15.1.0",
"react-redux": "^4.4.5",
"redux": "^3.5.2",
"redux-logger": "^2.6.1",
"redux-promise": "^0.5.3",
"redux-thunk": "^2.1.0",
"sass-loader": "^4.0.0",
"style-loader": "^0.13.1",
"webpack": "^1.13.1",
"webpack-dev-middleware": "^1.6.1",
"webpack-dev-server": "^1.14.1",
"webpack-hot-middleware": "^2.11.0"
},
STEP 2: Once you are done with installing the dependencies, then create a components folder in src folder. Within that create App.js file.
import React from 'react';
import UserList from '../containers/user-list';
import UserDetails from '../containers/user-detail';
require('../../scss/style.scss');
const App = () => (
<div>
<h2>User List</h2>
<UserList />
<hr />
<h2>User Details</h2>
<UserDetails />
</div>
);
export default App;
STEP 3: Next create a new actions folder and create index.js in it.
export const selectUser = (user) => {
console.log("You clicked on user: ", user.first);
return {
type: 'USER_SELECTED',
payload: user
}
};
STEP 4: Now create user-details.js in a new folder called containers.
import React, {Component} from 'react';
import {connect} from 'react-redux';
class UserDetail extends Component {
render() {
if (!this.props.user) {
return (<div>Select a user...</div>);
}
return (
<div>
<img height="150" width="150" src={this.props.user.thumbnail} />
<h2>{this.props.user.first} {this.props.user.last}</h2>
<h3>Age: {this.props.user.age}</h3>
<h3>Description: {this.props.user.description}</h3>
</div>
);
}
}
function mapStateToProps(state) {
return {
user: state.activeUser
};
}
export default connect(mapStateToProps)(UserDetail);
STEP 5: Inside the same folder create user-list.js file.
import React, {Component} from 'react';
import {bindActionCreators} from 'redux';
import {connect} from 'react-redux';
import {selectUser} from '../actions/index'
class UserList extends Component {
renderList() {
return this.props.users.map((user) => {
return (
<li key={user.id}
onClick={() => this.props.selectUser(user)}
>
{user.first} {user.last}
</li>
);
});
}
render() {
return (
<ul>
{this.renderList()}
</ul>
);
}
}
function mapStateToProps(state) {
return {
users: state.users
};
}
function matchDispatchToProps(dispatch){
return bindActionCreators({selectUser: selectUser}, dispatch);
}
export default connect(mapStateToProps, matchDispatchToProps)(UserList);
STEP 6: Now create reducers folder and create index.js within it.
import {combineReducers} from 'redux';
import UserReducer from './reducer-users';
import ActiveUserReducer from './reducer-active-user';
const allReducers = combineReducers({
users: UserReducer,
activeUser: ActiveUserReducer
});
export default allReducers
STEP 7: Within the same reducers folder, create reducer-users.js file.
export default function () {
return [
{
id: 1,
first: "Maxx",
last: "Flinn",
age: 17,
description: "Loves basketball",
thumbnail: "<a href="https://goo.gl/1KNpiy">https://goo.gl/1KNpiy</a>"
},
{
id: 2,
first: "Allen",
last: "Matt",
age: 25,
description: "Food Junky.",
thumbnail: "<a href="https://goo.gl/rNLgwv">https://goo.gl/rNLgwv</a>"
},
{
id: 3,
first: "Kris",
last: "Chen",
age: 23,
description: "Music Lover.",
thumbnail: "<a href="https://goo.gl/EVbPHb">https://goo.gl/EVbPHb</a>"
}
]
}
STEP 8: Now within reducers folder create a reducer-active-user.js file.
export default function (state = null, action) {
switch (action.type) {
case 'USER_SELECTED':
return action.payload;
break;
}
return state;
}
STEP 9: Now you need to create index.js in the root folder.
import 'babel-polyfill';
import React from 'react';
import ReactDOM from "react-dom";
import {Provider} from 'react-redux';
import {createStore, applyMiddleware} from 'redux';
import thunk from 'redux-thunk';
import promise from 'redux-promise';
import createLogger from 'redux-logger';
import allReducers from './reducers';
import App from './components/App';
const logger = createLogger();
const store = createStore(
allReducers,
applyMiddleware(thunk, promise, logger)
);
ReactDOM.render(
<Provider store={store}>
<App />
</Provider>,
document.getElementById('root')
);
STEP 10: Now that you are done writing the codes, launch your application at localhost:3000.
This brings us to the end of the blog on React Redux tutorial. I hope through this React Redux tutorial blog I was able to clearly explain what is Redux, its components, and why we use it with React. You can refer to this blog on ReactJS Tutorial, in case you want to learn more about React.
If you want to get trained in React and wish to develop interesting UI’s on your own, then check out the React JS Certification or Web Development Certification Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe.
Got a question for us? Please mention it in the comments section and we will get back to you.
Original article source at: https://www.edureka.co/
1669288987
Build React CRUD Admin panel with Ant Design | Refine Tutorial | React Admin Crash Course
In this video we will build a React admin panel for a Content Management system app. We will learn how to consume Rest API and add CRUD functionality using Refine which is a react based framework and we will also use Ant design components with refine for designing our admin panel.
⭐️ Refine - React Framework⭐️ Refine is a 100% open-source, headless React framework for CRUD apps, So you can quickly build internal tools, admin panels, and dashboards while remaining flexible.
GitHub: https://github.com/refinedev/refine
⭐️ Support my channel⭐️ https://www.buymeacoffee.com/dipeshmalvia
⭐️ Tutorial reference links⭐️
🔥 Video contents... ENJOY 👇
⭐️ React Roadmap for Developers ⭐️
⭐️ JavaScript ⭐️
🔗 Social Medias 🔗
⭐️ Tags ⭐️ - React CRUD Admin Panel - Build React Admin App From Scratch - React CRUD Admin Panel Tutorial - How to Build Admin Panel in React.js
⭐️ Hashtags ⭐️ #react #admin #beginners #tutorial
Disclaimer: It doesn't feel good to have a disclaimer in every video but this is how the world is right now. All videos are for educational purposes and use them wisely. Any video may have a slight mistake, please make decisions based on your research. This video is not forcing anything on you.
https://youtu.be/eDcxcTSQJaA
1669269028
Power BI Tutorial: A Step by Step Guide with Examples
The concept of Business Intelligence is something that is alien to very few people these days. With newer tools emerging every day to help solve the crisis of data management, most organizations have already moved in or have plans to use Business Intelligence in solving their crisis. Power BI is Microsoft’s latest BI tool mainly aimed to help everyone analyze and visualize their data. This Power BI tutorial for beginners will give you a complete insight into Power BI in the following sequence:
You may go through this Microsoft Power BI recording where our Power BI Certification Training expert has explained the topics in a detailed manner with examples that will help you to understand the concepts better.
Let us begin this Power BI tutorial by addressing the most essential and fundamental question, what exactly is Business Intelligence?
In an age where Business Intelligence has become a bigger domain than most trending technologies, if you ask twenty people what the term business intelligence means, you are likely to get ten different answers. So let me put it in the simplest terms without losing the technicality of it. Business intelligence (BI) is the set of techniques and tools for the transformation of raw data into meaningful and useful information for business analysis. To put it simply, Business Intelligence is the technology that gets the right data to the right people at the right time so that they can make more effective business decisions.
The image below shows the benefits of Business Intelligence.
Over the years, the process of business intelligence has grown and adapted to help solve almost all the challenges while dealing with data by involving newer tools and techniques. The change that Business Intelligence has seen over the years can be divided into 3 waves, so let us continue with our Power BI tutorial and take a look at these three waves.
1st Wave: Technical (IT To End User)
During the first wave of business intelligence, the end-user had to be dependent on the IT department for data insights. This is because it was not possible for end-users to create visualizations/ reports on their own as tools available required technical knowledge. This dependence on the IT department for insights resulted in more effort and time consumption to get the updates done.
2nd Wave: Self-Service (Analyst To End User)
The second wave gave analysts access to BI. Now, people with some knowledge of analytics could use the BI tools. This meant more teams had access to BI and more people could get better data insights, which eased the load on IT teams.
3rd Wave: Everyone (End User)
The third wave has made it easier to access data and create reports and visuals to get better business insights. The introduction of tools like Power BI made this transition easy. Now anybody who has a basic understanding of the data can create reports and build intuitive, shareable dashboards.
This was about BI, now let us continue with our Power BI tutorial and understand another important topic that is associated with BI.
In a nutshell, data visualization is nothing but the pictorial or graphical representation of information/ data. It provides insights into complex data sets by communicating the key aspects in more intuitive and meaningful ways. Data visualization lies at the intersection of design, communication, and information science.
Even though data visualization has been called a key skill for research in the twenty-first century, it goes way back. It existed in the late 18th century and can be traced back to when William Playfair invented geometrical charts. His bar charts were used to represent Scotland’s imports from and exports to 17 countries in 1781. These bar charts constituted a pure solution to the problem of discrete quantitative comparison.
Because of the way the human brain processes information, it is easier to use images, charts, or graphs to understand and visualize large amounts of complex data than to go through spreadsheets or reports. Take any image, for example; we all know the phrase ‘An image is worth a thousand words’. This is completely true because images aren’t just a mere collection of pixels, they also hold a lot of information. This information in visual form is easier to understand than reading the same facts in text form.
Data visualization is a quick and easy way to convey concepts or information in a universal manner. Data visualization can help to:
This was about data visualization. Next, in this Power BI tutorial, we will see why Power BI is important.
The following points make Power BI one of the prominent tools for data visualization. This Power BI tutorial would be incomplete without understanding these points.
The above-mentioned reasons make Power BI very important in the context of data visualization. Let us continue with this Power BI tutorial for Beginners and understand What is Power BI.
Power BI is a name that has been in the BI market for quite a long time. The Microsoft team worked for a long time to build a big umbrella called Power BI: a combination of strong visualization, data analysis, and cloud-based tooling.
To define it, Power BI is a business analytics service provided by Microsoft. It provides interactive visualizations with self-service business intelligence capabilities, where end users can create reports and dashboards by themselves, without having to depend on information technology staff or database administrators.
Power BI also gives you cloud-based BI services, known as “Power BI Services”, along with a desktop-based interface, called “Power BI Desktop”. It offers data warehouse capabilities, including data preparation, data discovery, and interactive dashboards. In March 2016, Microsoft released an additional service called Power BI Embedded on its Azure cloud platform which enables the user to analyze data easily, perform various ETL operations and deliver reports with Power BI.
Power BI gateways let you connect SQL Server databases, Analysis Services, and many other data sources to your dashboards in Power BI, while reporting portals can embed Power BI reports and dashboards to give you a unified experience. The image below shows Power BI’s general workflow.
Now that we have understood what Power BI is, let us try and understand its important components in the next topic of this Power BI tutorial.
Power BI has the following components:
Now that we have seen the above-mentioned components, let us continue with this Power BI tutorial and understand Power BI’s architecture.
The following image shows Power BI’s architecture.
Power BI’s architecture has three phases. The first two phases partially use ETL (Extract, Transform and Load) to handle the data. Let us take a look at these phases one by one:
An organisation may have to deal with data that comes from different sources and in different file formats. The data is first extracted from the different sources, which can be different servers, databases, and so on. This data is then integrated into a standard format and stored in a common area called the staging area.
The integrated data is still not ready for visualization because it needs processing before it can be presented. This data is pre-processed or cleaned; for example, missing or redundant values are removed from the data set. After the data is cleaned, business rules are applied to it and it is transformed into presentable data. This data is then loaded into the data warehouse.
Once the data is loaded and processed, it can be visualized much better with the various visualizations that Power BI has to offer. Reports and dashboards help represent the data in a more intuitive manner, and these visuals help business end users make decisions based on the insights.
Features of Power BI
Power BI offers the functionality to visually represent our data or a subset of it so that it can be used to draw inferences or gain a deeper understanding of the data. These visuals can be bar graphs, pie charts, etc. Following are some examples of basic visual options provided in Power BI-
The following is an example of 4 basic visuals (slicer, table, pie chart, and stacked column chart) created using Power BI.
Apart from these basic visuals, there are options of obtaining more visuals as well. By clicking on the ‘Get more visuals’ option we obtain the following options-
Custom visual files – Custom visuals can be coded and stored in files with the .pbiviz extension. This option enables users to import such visuals.
Organisational visuals – This option can be used to import visuals specific to the user’s organization.
Marketplace visuals – This option is used to import visuals from Microsoft and its fellow community members.
Datasets in Power BI can be sourced from a variety of sources.
Some common examples of data sources are-
Excel
Power BI datasets
Power BI dataflows
SQL Server
MySQL database
Analysis Services
Azure
Text/CSV
Oracle
Access
XML
JSON
While sourcing the data, instead of importing the entire dataset, the user can source a subset of it. This subset may be as per the user requirement. Data may be integrated with Excel, SQL database, Azure, Facebook, MailChimp, etc.
Data can be sourced from either a single source or from more than one source. The following is an example of a dataset sourced in Power BI-
Click on Transform data.
The user can choose the rows or columns as required and thus create the desired subset. This selection can be based on a condition, such as selecting rows whose values for a particular field fall within a specific range.
The following image shows a filter applied to the Pclass field in the above dataset.
After applying the filter it shows only the rows belonging to Pclass 2 and 3.
A collection of visualizations relevant to a particular topic in Power BI forms a dashboard, and a combination of these dashboards forms a report. A report contains visuals related to a particular topic. The user may add any number of pages to the report. Each page is a single screen containing visuals, and the pages can be arranged in the order required by the user.
The image below shows a sample report.
All the visuals appearing on a single Power BI page form a dashboard. It is a single page in a report. The visuals can be arranged in any order or position. Since it is a single page a dashboard generally contains only the most important or relevant visuals. Each dashboard can be shared with other users as well.
In Power BI, a tile is a single visualization found in a report or on a dashboard. A tile can be thought of as a square or rectangular boundary containing a single visual.
The height and width of each tile are adjustable. The order or position of each tile on the dashboard is adjustable as well.
The Navigation pane is present on the top of the Power BI screen. It has the following tabs-
There is a range of options in each tab to work with.
Click on the Q&A button in the Insert tab. A Q&A question box appears, where users can type any question related to the data in natural language. Power BI will automatically try to complete the question using techniques like rephrasing, autofill, and suggestions. The answer is returned in the form of a visual or text, and the user has the option of converting a text reply to a visual as well.
The image below shows a question asked in natural language (spelling corrected automatically) and its answer as a number, which can also be converted to a visual.
The following image shows the answer converted to a visual.
To perform functions on data, the user can use predefined DAX functions. There are currently around 200 predefined DAX functions available in Power BI. DAX, or Data Analysis Expressions, is a language used to interact with data on platforms like Power BI, Power Pivot, and SSAS. It is simple and easy to learn and use.
Power BI can be integrated with R scripts as well. This helps in data cleaning, data shaping and thus obtaining advanced analytics.
Power BI provides robust security where access to each member is controlled. It provides quick responses to security threats. It also provides features like continuous monitoring, reporting, data protection, and unified endpoint management.
A collection of data that can be imported into Power BI is known as a dataset. Through the Get Data feature, Power BI users can select from a range of data sources. The data sources can range anywhere from on-premise to cloud-based, and from unstructured to structured. New data sources are added every month. Data may be sourced from one or many different sources that can be combined together.
To source the data, click on the Get Data icon on the top of the screen. The data sources available for each category are as follows-
File category:
Database category
Power Platform category
Azure category
Online Services category
Other categories
The following are some of the companies currently using Power BI-
2. Click on the Download Free button. The following page appears.
3. Choose the language and click the Download button. The following page appears-
4. Select the file to download and click Next. The Power BI setup is downloaded.
5. Open the Power BI setup.
6. Click Next.
7. Accept the terms and click Next.
8. Select the destination folder as required and click Next.
9. Click on Install.
The setup is installed.
This video will help you understand what BI is, as well as Power BI. Moving on, the video discusses the components and building blocks of Power BI.
Original article source at: https://www.edureka.co/
1669211583
ReactJS Tutorial – Design Your Web UI Using ReactJS JavaScript Library
Most of you would have heard about ‘ReactJS’ also known as React. For those of you curious to know more, I’ll be covering all the core concepts of React you need to know. By the end of this ReactJS tutorial, I’m confident that you will be clear with all the fundamentals of React. Let me start by giving you an overview of what I’ll be covering in this ReactJS tutorial.
You may go through this recording of ReactJS Tutorial where our React training expert has explained the topics in a detailed manner with examples that will help you to understand the concept better.
React is a JavaScript library used to build the user interface for web applications. React was initially developed and maintained by the folks at Facebook, which was later used in their products (WhatsApp & Instagram). Now it is an open source project with an active developer community. Popular websites like Netflix, Airbnb, Yahoo!Mail, KhanAcademy, Dropbox and many more use React to build their UI. Modern websites are built using MVC (model view controller) architecture. React is the ‘V’ in the MVC which stands for view, whereas the architecture is provided by Redux or Flux. React native is used to develop mobile apps, the Facebook mobile app is built using React native.
Facebook’s annual F8 Developer conference 2017, saw two promising announcements: React Fiber and ReactVR. React Fiber is a complete rewrite of the previous release focusing on incremental rendering and quick responsiveness, React Fiber is backward compatible with all previous versions. ReactVR is built on top of React Native frameworks, it enables developing UI with the addition of 3D models to replicate 360-degree environment resulting in fully immersive VR content.
“Let’s just write less and do more!!”
React is among the easiest JS libraries you can start with. Conventional vanilla JavaScript is more time-consuming; why waste time writing lengthy code when you can get things done smoothly with React? React has over 71,200 stars on GitHub, making it the 4th most starred project of all time. After looking at the example below, I am sure you will understand why front-end developers across the world are switching to React. Now let’s try coding a set of nested lists in React and compare it with the conventional JavaScript syntax. To learn more about React, check out this Web developer course today.
Example: 30 lines of code in Vanilla JavaScript can be replaced by just 10 lines of React code, isn’t that awesome!!
React
<ol>
  <li>List item 1</li>
  <li>List item 2 (child list)
    <ul>
      <li>Subitem 1</li>
      <li>Subitem 2</li>
    </ul>
  </li>
  <li>Final list item</li>
</ol>
Equivalent Vanilla JavaScript
React.createElement(
  "ol",
  null,
  React.createElement(
    "li",
    null,
    "List item 1"
  ),
  React.createElement(
    "li",
    null,
    "List item 2 (child list)",
    React.createElement(
      "ul",
      null,
      React.createElement(
        "li",
        null,
        "Subitem 1"
      ),
      React.createElement(
        "li",
        null,
        "Subitem 2"
      )
    )
  ),
  React.createElement(
    "li",
    null,
    "Final list item"
  )
);
As you have probably figured out, as the complexity increases, the generated JavaScript code becomes unmanageable. This is where JSX comes to the rescue, ensuring the code stays short and easily readable.
Figure: ReactJS Tutorial – Dependencies
Before we dive deeper into this ReactJS tutorial, let me first introduce you to some key terms you need to be familiar with.
JSX (JavaScript Extension)
JSX allows us to include ‘HTML’ in the same file along with ‘JavaScript’ (HTML + JS = JSX). Each component in React generates some HTML which is rendered by the DOM.
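As a quick illustration (the variable names here are made up, not from this article), the JSX line below is what you write, and the trailing comment shows roughly what it compiles down to:
const name = 'World';
// JSX: HTML-like syntax embedded directly in JavaScript
const element = <h1>Hello, {name}!</h1>;
// After compilation, the line above becomes roughly:
// React.createElement('h1', null, 'Hello, ', name, '!');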
ES6 (ES2015)
The sixth version of JavaScript was standardized by ECMA International in 2015, hence the language is referred to as ECMAScript. ES6 is not completely supported by all modern browsers.
ES5(ES2009)
This is the fifth JavaScript version and is widely accepted by all modern browsers; it is based on the 2009 ECMA specification standard. Tools are used to convert ES6 code to ES5 so that it runs in these browsers.
Webpack
A module bundler which generates a build file joining all the dependencies.
Babel
This is the tool used to convert ES6 to ES5. This is done because not all web browsers can render React (ES6+JSX) directly.
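As a rough, hedged illustration of the kind of transformation Babel performs (the exact output differs between Babel versions and presets; the ES5 output is shown as comments so the snippet stays runnable):
// ES6 source
const square = (x) => x * x;
const numbers = [1, 2, 3].map(square);

// Roughly equivalent ES5 output:
// var square = function (x) {
//   return x * x;
// };
// var numbers = [1, 2, 3].map(square);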
Figure: ReactJS Tutorial – React Features
React has a shallow learning curve and it is suitable for beginners. ES6 syntax is easier to manage especially for smaller to-do apps. In React, you code in the ‘JavaScript’ way, giving you the freedom to choose your tool depending upon your need. Angular expects you to learn one additional tool ‘typescript’ which can be viewed as the ‘Angular’ way of doing things. In ‘Angular’ you need to learn the entire framework even if you’re just building a simple UI application.
Moving ahead in this ReactJS tutorial, I will be discussing React’s Virtual DOM.
Figure : ReactJS Tutorial – React Virtual DOM
In contrast to manipulating the actual DOM directly, React makes use of a Virtual DOM. The Virtual DOM uses a diffing algorithm for its calculations, which relieves the real DOM and leaves it free to process other tasks. Let me illustrate this with an example.
Consider a page with 10,000 DOM nodes of which we only need to work on 2. Most of the processing would be wasted traversing those 10,000 nodes while we only operate on 2. The calculations to find those 2 nodes are done by the Virtual DOM, and the real DOM then quickly updates just them.
When it comes to performance, React sits right at the top. React is known for its superior rendering speed. Thus the name “React”, an instant reaction to change with minimum delay. DOM manipulation is the heart of a responsive website, unfortunately it is slow in most JavaScript frameworks. However, Virtual DOM is implemented in React, hence it is the underlying principle behind React’s superior performance.
As we already know, React is not a framework, so features may be added according to the user’s needs. This is the principle behind the lightweight applications built on React: pick only what is needed. Webpack offers several plugins which further minimize (minify) the size for production. The React + Redux bundle, minified, is around 200 KB, whereas its rival Angular is almost four times bigger (Angular + RxJS bundle).
There will be a point when a developer hits a roadblock. It could be as simple as a missing bracket or as tricky as a segmentation fault. In any case, the earlier the exception is caught, the lower the cost overhead. React detects errors at compile time, at an early stage, which ensures that errors don’t silently turn up at run time. Facebook’s unidirectional data flow allows clean and smooth debugging, fewer stack traces, less clutter, and an organized Flux architecture for bigger applications.
While React is easier to learn for beginners with no prior JavaScript experience, the nitty-gritty of transpiling JSX code can often be overwhelming. This sets the tone for tools such as Babel and Webpack. Webpack and Babel bundle all the JavaScript files together into a single file. Just like how we used to include a link to the CSS and JS files in our HTML code, Webpack performs a similar function, eliminating the need for explicitly linking files.
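To give a rough idea of how the two tools fit together, here is a minimal, illustrative webpack configuration that sends .js/.jsx files through babel-loader. The file names, paths, and presets here are assumptions for illustration, not taken from this article; create-react-app hides this configuration from you entirely:
// webpack.config.js (illustrative sketch)
const path = require('path');

module.exports = {
  entry: './src/index.js',              // application entry point
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'               // the single build file containing all dependencies
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/,                // run .js and .jsx files through Babel
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: { presets: ['@babel/preset-env', '@babel/preset-react'] }
        }
      }
    ]
  }
};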
I’m sure all of you use Facebook. Now, imagine Facebook being split into components, each functionality is assigned to a specific component and each component produces some HTML which is rendered as output by the DOM.
Facebook Components
To make things clear, refer to the image below.
Figure: ReactJS Tutorial – Facebook Components
Moving on to the core aspect of our ReactJS tutorial, let us discuss the building blocks of React.
The entire application can be modeled as a set of independent components. Different components are used to serve different purposes. This enables us to keep logic and views separate. React renders multiple components simultaneously. Components can be either stateful or stateless.
Before we start creating components, we need to include a few ‘import’ statements.
In the first line, we have to instruct JavaScript to import the ‘react’ library from the installed ‘npm’ module. This takes care of all the dependencies needed by React.
import React from 'react';
The HTML generated by the component needs to be displayed on to the DOM, we achieve this by specifying a render function which tells React where exactly it needs to be rendered (displayed) on the screen. For this, we make a reference to an existing DOM node by passing a container element.
In React, the DOM is part of the ‘react-dom’ library. So in the next line, we have to instruct JavaScript to import ‘react-dom’ library from the installed npm module.
import ReactDOM from 'react-dom';
In our example, we create a component named ‘MyComponent’ which displays a welcome message. We pass the component instance ‘<MyComponent>’ to React along with its container ‘<div >’ tag.
const MyComponent = () => {
  return (
    <h2>Way to go you just created a component!!</h2>
  );
};
ReactDOM.render(<MyComponent/>, document.getElementById('root'));
Props
“All the user needs to do is, change the parent component’s state, while the changes are passed down to the child component through props.”
Props is a shorthand for properties (You guessed it right!). React uses ‘props’ to pass attributes from ‘parent’ component to ‘child’ component.
Props are the arguments passed to a function or component, which are ultimately processed by React. Let me illustrate this with an example.
function Message(props) {
  return <h1>Good to have you back, {props.username}</h1>;
}
function App() {
  return (
    <div>
      <Message username="jim" />
      <Message username="duke" />
      <Message username="mike" />
    </div>
  );
}
ReactDOM.render(
  <App/>,
  document.getElementById('root')
);
Here the ‘App’ component has rendered three ‘Message’ component instances, each with the prop ‘username’. All three usernames are passed as arguments to the Message component.
The output screen is as shown below:
Figure: ReactJS Tutorial – Props Output
“And I believe state adds the greatest value to React.”
State allows us to create components that are dynamic and interactive. State is private; it must not be manipulated from the outside. It is also important to know when to use state: it is generally used for data that is bound to change. For example, when we click a toggle button it changes from the ‘inactive’ to the ‘active’ state. Use state only when needed; if a value is not used in render(), don’t include it in the state. We do not use state with static components. The initial state can only be set inside the constructor. Let’s include some code snippets to explain the same.
class Toggle extends React.Component {
  constructor(value) {
    super(value);
    this.state = {isToggleOn: true};
    this.handleClick = this.handleClick.bind(this);
  }
Binding is needed explicitly, as by default the event is not bound to the component.
Whenever an event such as a button click or a mouse hover occurs, we need to handle these events and perform the appropriate actions. This is done using event handlers.
While the initial state is set only once inside the constructor, it can be manipulated later through the setState() method. Whenever the handleClick() function is called, isToggleOn is switched between the ‘active’ and ‘inactive’ states based on the previous state.
handleClick() {
  this.setState(prevState => ({
    isToggleOn: !prevState.isToggleOn
  }));
}
The onClick attribute specifies the function to be executed when the target element is clicked. In our example, whenever the onClick event fires, we are telling React to transfer control to handleClick(), which switches between the two states.
render() {
  return (
    <button onClick={this.handleClick}>
      {this.state.isToggleOn ? 'ON' : 'OFF'}
    </button>
  );
}
} // end class
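As a side note that is not part of the original example: if your Babel setup supports class fields (create-react-app does), you can declare the handler as an arrow-function class property and skip the explicit bind call in the constructor. A minimal sketch:
class Toggle extends React.Component {
  state = {isToggleOn: true};

  // An arrow-function class property captures `this` lexically,
  // so no this.handleClick.bind(this) is needed in a constructor.
  handleClick = () => {
    this.setState(prevState => ({
      isToggleOn: !prevState.isToggleOn
    }));
  };

  render() {
    return (
      <button onClick={this.handleClick}>
        {this.state.isToggleOn ? 'ON' : 'OFF'}
      </button>
    );
  }
}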
We need to initialize resources for components according to their requirements. This is called “mounting” in React. It is equally important to release the resources held by components when they are destroyed, so that performance is maintained and unused resources are freed. This is called “unmounting” in React. It is not essential to use the lifecycle methods, but use them if you wish to take control of the complete resource allocation and release process. The lifecycle methods componentDidMount() and componentWillUnmount() are used to allocate and release resources respectively.
class Time extends React.Component {
  constructor(value) {
    super(value);
    this.state = {date: new Date()};
  }
We create a timer ID and set an interval of 2 seconds. This is the time interval at which the component is refreshed.
componentDidMount() {
  this.timerID = setInterval(() => this.tick(), 2000);
}
Here the timer is cleared when the component is about to be destroyed, so that the resources it holds are released. Performing such manipulations through ‘state’ can be viewed as the recommended approach.
componentWillUnmount() {
  clearInterval(this.timerID);
}
A timer is set to call the tick() method once every two seconds. An object with the current date is passed to setState. Each time React calls the render() method, the this.state.date value is different, so React displays the updated time on the screen.
tick() {
  this.setState({date: new Date()});
}
render() {
  return (
    <div>
      <h2>The Time is {this.state.date.toLocaleTimeString()}.</h2>
    </div>
  );
}
} // end class
ReactDOM.render(<Time />, document.getElementById('root'));
Keys in React provide identity to components; they are the means by which React identifies components uniquely. While working with individual components we don’t need keys, as React takes care of key assignment according to the rendering order. However, we need a strategy to differentiate between thousands of elements in a list, so we assign them ‘keys’. If we need to access the last component in a list using keys, it saves us from traversing the entire list sequentially. Keys also serve to keep track of which items have been manipulated. Keys should be given to the elements inside the array to give the elements a stable identity.
In our example below, we create an array ‘data’ with four items and assign each item the index ‘i’ as its key. We achieve this by defining the key as a prop and using the JavaScript ‘map’ function to pass the key for each element of the array, returning the result to the ‘Content’ component.
class App extends React.Component {
  constructor() {
    super();
    this.state = {
      data: [
        { item: 'Java', id: '1' },
        { item: 'React', id: '2' },
        { item: 'Python', id: '3' },
        { item: 'C#', id: '4' }
      ]
    };
  }

  render() {
    return (
      <div>
        <div>
          {this.state.data.map((dynamicComponent, i) =>
            <Content key={i} componentData={dynamicComponent} />
          )}
        </div>
      </div>
    );
  }
}
class Content extends React.Component {
  render() {
    return (
      <div>
        <div>{this.props.componentData.item}</div>
        <div>{this.props.componentData.id}</div>
      </div>
    );
  }
}
ReactDOM.render(
<App/>,
document.getElementById('root'));
There are several ways to install React. In short, we can either configure the dependencies manually or use the open-source starter packs available on GitHub. The ‘create-react-app’ (CRA) tool, maintained by Facebook itself, is one such example. It is suitable for beginners who want to focus on code without manually having to deal with transpiling tools like Webpack and Babel. In this ReactJS tutorial I will be showing you how to install React using the CRA tool.
npm: the Node Package Manager manages the different dependencies needed to run ReactJS applications. npm is bundled together with Node.js.
Step 1: Download NodeJS
First, go to the Node.js website, download the .exe file according to your system configuration, and install it.
Link: https://nodejs.org/en/download/
Step 2: Download the ‘create-react-app’ Tool from GitHub
Link: https://github.com/facebookincubator/create-react-app
Step 3: Open the command prompt and navigate to your project directory.
Now, enter the following commands
-> npm install -g create-react-app
-> create-react-app my-app
-> cd my-app
Step 4: -> npm start
Once we type “npm start”, the application starts executing on port 3000. Open http://localhost:3000/ and you will be greeted by this page.
Figure: ReactJS Tutorial – Welcome Page
This is how the file structure should look once you have successfully installed React.
my-app
├── README.md
├── node_modules
├── package.json
├── .gitignore
├── public
│ └── favicon.ico
│ └── index.html
│ └── manifest.json
└── src
└── App.css
└── App.js
└── App.test.js
└── index.css
└── index.js
└── logo.svg
└── registerServiceWorker.js
When you are creating new apps, all you need to do is update the file ‘App.js’, and the changes will be reflected automatically; other files can be added or removed as needed. Make sure you put all CSS and JS files inside the ‘/src’ directory.
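For example, here is a minimal, purely illustrative App.js you could drop in to confirm that saving a change reloads the page (this is not the default file create-react-app generates):
// src/App.js (illustrative sketch)
import React from 'react';

const App = () => (
  <div>
    <h1>Hello from App.js</h1>
    <p>Edit this file and save to see the page update.</p>
  </div>
);

export default App;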
This brings us to the end of this ReactJS tutorial blog. Hope each and every aspect I discussed above is clear to you all. To learn more check out our courses on React.
If you found this blog on “ReactJS tutorial” relevant, check out the Web Development Course Training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. This Edureka course helps learners gain expertise in both fundamental and advanced topics in React enabling you to develop full-fledged, dynamic web applications on the go.
Got a question for us? Please mention it in the comments section and we will get back to you.
Original article source at: https://www.edureka.co/
1669186933
Jenkins is one of the most important tools in DevOps. I hope you have read my previous blog on What is Jenkins. In this Jenkins Tutorial blog, I will focus on Jenkins architecture and Jenkins build pipeline along with that I will show you how to create a build in Jenkins.
Before we proceed with Jenkins Tutorial, the key takeaways from the previous blog are:
Now is the correct time to understand Jenkins architecture.
Let us revise the standalone Jenkins architecture that I explained in the previous blog; the diagram below depicts the same.
This single Jenkins server was not enough to meet certain requirements like:
To address the above stated needs, Jenkins distributed architecture was introduced.
Jenkins uses a Master-Slave architecture to manage distributed builds. In this architecture, the Master and Slaves communicate over the TCP/IP protocol.
Jenkins Master
Your main Jenkins server is the Master. The Master’s job is to handle:
A Slave is a Java executable that runs on a remote machine. Following are the characteristics of Jenkins Slaves:
The diagram below is self-explanatory. It consists of a Jenkins Master managing three Jenkins Slaves.
Now let us look at an example in which Jenkins is used for testing in different environments, such as Ubuntu, macOS, and Windows.
The diagram below represents the same:
The following functions are performed in the above image:
The build pipeline is used to know which task Jenkins is currently executing. Often several different changes are made by several developers at once, so it is useful to know which change is being tested, which change is sitting in the queue, or which build is broken. This is where the pipeline comes into the picture: the Jenkins Pipeline gives you an overview of where tests are up to. In a build pipeline, the build as a whole is broken down into sections such as the unit test, acceptance test, packaging, reporting, and deployment phases. The pipeline phases can be executed in series or in parallel, and if one phase is successful, it automatically moves on to the next phase (hence the relevance of the name “pipeline”). The image below shows what a multiple-build pipeline looks like.
Hope you have understood the theoretical concepts. Now, let’s have some fun with hands-on.
I will create a new job in Jenkins, it is a Freestyle Project. However, there are 3 more options available. Let us look at the types of build jobs available in Jenkins.
Freestyle Project:
Freestyle build jobs are general-purpose build jobs which provide maximum flexibility. The freestyle build job is the most flexible and configurable option and can be used for any type of project. It is relatively straightforward to set up, and many of the options we configure here also appear in other build jobs.
Multiconfiguration Job:
The “multiconfiguration project” (also referred to as a “matrix project”) allows you to run the same build job in different environments. It is used for testing an application in different environments, with different databases, or even on different build machines.
Monitor an External Job:
The “Monitor an external job” build job lets you keep an eye on non-interactive processes, such as cron jobs.
Maven Project:
The “maven2/3 project” is a build job specially adapted to Maven projects. Jenkins understands Maven pom files and project structures, and can use the information gleaned from the pom file to reduce the work you need to do to set up your project.
Here is a video on Jenkins tutorial for better understanding of Jenkins. Check out this Jenkins tutorial video.
Step 1: From the Jenkins interface home, select New Item.
Step 2: Enter a name and select Freestyle project.
Step 3: This next page is where you specify the job configuration. As you’ll quickly observe, there are a number of settings available when you create a new project. On this configuration page, you also have the option to Add build step to perform extra actions like running scripts. I will execute a shell script.
This will provide you with a text box in which you can add whatever commands you need. You can use scripts to run various tasks like server maintenance, version control, reading system settings, etc. I will use this section to run a simple script.
Step 4: Save the project, and you’ll be taken to a project overview page. Here you can see information about the project, including its build history.
Step 5: Click Build Now on the left-hand side to start the build.
Step 6: To see more information, click on that build in the build history area, whereupon you’ll be taken to a page with an overview of the build information.
Step 7: The Console Output link on this page is especially useful for examining the results of the job in detail.
Step 8: If you go back to Jenkins home, you’ll see an overview of all projects and their information, including status.
Status of the build is indicated in two ways, by a weather icon and by a colored ball. The weather icon is particularly helpful as it shows you a record of multiple builds in one image.
As you can see in the above image, the sun represents that all of my builds were successful. The color of the ball gives us the status of that particular build, in the above image the color of the ball is blue which means that this particular build was successful.
In this Jenkins Tutorial, I have just given an introductory example. In my next blog, I will show you how to pull and build code from the GitHub repository using Jenkins.
If you found this Jenkins Tutorial relevant, check out the DevOps training by Edureka, a trusted online learning company with a network of more than 250,000 satisfied learners spread across the globe. The Edureka DevOps Certification Training course helps learners gain expertise in various DevOps processes and tools such as Puppet, Jenkins, Nagios and GIT for automating multiple steps in SDLC.
Got a question for us? Please mention it in the comments section and we will get back to you.
Original article source at: https://www.edureka.co/
1669013905
Learn what reflection is and how it works in Kotlin with this tutorial.
In programming, reflection is a programming language’s ability to inspect and interact with statically defined classes, functions, and properties during runtime.
The feature is particularly useful when you receive an object instance of an unknown class.
By using reflection, you can check if a particular object has a certain method, and call that method when it exists.
To use reflection in Kotlin, you need to include the kotlin-reflect library in your project:
dependencies {
implementation("org.jetbrains.kotlin:kotlin-reflect:1.6.10")
}
The library contains the runtime component required for using Kotlin reflection features.
Next, let’s see how you can get class, function, and property references using Kotlin reflection feature.
Suppose you have a Dog class with the following definitions:
class Dog(var name: String) {
fun bark() {
println("Bark!")
}
fun bark(sound: String) {
println(sound)
}
private fun hello() {
println("Hello! My name is $name")
}
}
To get the class reference in Kotlin, you can use the class literal syntax ::class as shown below:
val classRef = Dog::class
Alternatively, you can get the class reference from an object instance by using the same ::class syntax on the instance:
val myDog = Dog("Puppy")
val classRef = myDog::class
Getting the class reference from an object is also known as a bounded class reference.
Once you have the class reference, you can access the properties of the reference to find out more about that class.
For example, you can find the name of the class and check if that class is a data class:
println(classRef.simpleName) // Dog
println(classRef.qualifiedName) // org.metapx.Dog
println(classRef.isData) // false
In Kotlin, the class reference is identified as the KClass type, which stands for Kotlin class.
You can check the KClass documentation for all the properties and methods you can use to find out about the class from its reference.
Aside from inspecting the class, KClass also has some interesting abilities. The createInstance() method, for example, allows you to create a new object from the class reference:
val secondDog = classRef.createInstance()
But keep in mind that the createInstance() method only works when the class has a constructor with no parameters or with only optional parameters. An error will be thrown when no constructor fulfills the criteria (for example, the Dog class above requires a name argument with no default value, so calling createInstance() on it would throw).
You can also get access to the methods of the class reference regardless of their access modifier.
This means even private functions of a class can be accessed from its reference.
The memberFunctions property of KClass stores all methods of the class as a Collection:
val myDog = Dog("Puppy")
val classRef = myDog::class
classRef.memberFunctions.forEach {
println(it.name)
}
The output will be as follows:
bark
bark
hello
equals
hashCode
toString
Next, you can call the class function from its reference as follows:
val myDog = Dog("Puppy")
val classRef = myDog::class
val barkRef = classRef.memberFunctions.find {
it.name == "bark"
}
barkRef?.call(myDog)
First, you need to use the find() function to retrieve the function reference.
Then, check if the function reference is found using the null-safe call.
When the reference is found, use the call() method from the KFunction type of the function reference.
The first argument of the call() method must be an instance of the class, which is why the myDog object is passed into the method.
When your function is private, you need to set the isAccessible property of the function reference to true before calling the function:
val helloRef = classRef.memberFunctions.find {
it.name == "hello"
}
helloRef?.isAccessible = true
helloRef?.call(myDog)
And that’s how you access the methods of a class using its reference.
The properties of a Kotlin class reference can be accessed the same way you access its methods.
The properties of a class are stored in memberProperties as a Collection.
For example, you can get the name property value of the myDog instance as follows:
val myDog = Dog("Puppy")
val classRef = myDog::class
val nameRef = classRef.memberProperties.find {
it.name == "name"
}
println(nameRef?.getter?.call(myDog)) // Puppy
A property reference is an instance of the KProperty type. The value of the property is retrieved by calling its getter.
To change the value of the name property, you need to cast the property reference to KMutableProperty first, as shown below:
val myDog = Dog("Puppy")
val classRef = myDog::class
val nameRef = classRef.memberProperties.find {
it.name == "name"
} as KMutableProperty<*>?
nameRef?.setter?.call(myDog, "Jacob")
println(myDog.name) // Jacob
The KMutableProperty type holds the setter, which you need to call to set the value of the property.
Now you’ve learned how to access methods and properties from a class reference.
Next, let’s look at how you can get a function reference with Kotlin reflection.
You can get a reference to a named Kotlin function by using the :: operator.
Here’s an example:
fun hello() {
println("Hello World!")
}
val funRef = ::hello
funRef() // Hello World!
The funRef above will be an instance of the KFunction type, which represents a function with introspection capabilities.
Now you’ve learned what the Kotlin reflection feature is and how it works with some examples. Reflection is a powerful feature that’s only used for specific requirements.
Because of its ability to inspect source code at runtime, reflection is frequently used when developing a framework or library that will be built upon further.
JUnit and Spring frameworks are notable for using reflection in their source code.
The library author won’t know the classes and functions created by the user. Reflection allows the framework to deal with classes and functions without knowing about them in advance.
Original article source at: https://sebhastian.com/