Amy Waelchi

How to Use Autograd in PyTorch To Solve Regression Problem

In this PyTorch tutorial, we learn how to use autograd in PyTorch to solve a regression problem. We usually use PyTorch to build a neural network. However, PyTorch can do more than this. Because PyTorch is also a tensor library with automatic differentiation capability, you can easily use it to solve a numerical optimization problem with gradient descent. In this post, you will learn how PyTorch’s automatic differentiation engine, autograd, works.

After finishing this tutorial, you will learn:

  • What is autograd in PyTorch
  • How to make use of autograd and an optimizer to solve an optimization problem

Let’s get started.

Overview

This tutorial is in three parts; they are:

  • Autograd in PyTorch
  • Using Autograd for Polynomial Regression
  • Using Autograd to Solve a Math Puzzle

Autograd in PyTorch

In PyTorch, you can create tensors as variables or constants and build an expression with them. The expression is essentially a function of the variable tensors. Therefore, you can derive its derivative function, i.e., the gradient. This is the foundation of the training loop in a deep learning model. PyTorch comes with this feature at its core.

It is easier to explain autograd with an example. In PyTorch, you can create a constant matrix as follows:


import torch
 
x = torch.tensor([1, 2, 3])
print(x)
print(x.shape)
print(x.dtype)

The above prints:


tensor([1, 2, 3])
torch.Size([3])
torch.int64

This creates an integer vector (in the form of a PyTorch tensor). This vector can work like a NumPy vector in most cases. For example, you can do x+x or 2*x, and the result is just what you would expect. PyTorch comes with many functions for array manipulation that match NumPy, such as torch.transpose or torch.concatenate.
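
For instance, the following small snippet (an illustrative addition, not part of the original article) checks a couple of these operations:


import torch
 
x = torch.tensor([1, 2, 3])
print(x + x)    # tensor([2, 4, 6])
print(2 * x)    # tensor([2, 4, 6])
 
m = torch.tensor([[1, 2], [3, 4]])
print(torch.transpose(m, 0, 1))  # swaps dimensions 0 and 1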

But this tensor is not treated as a variable of a function, in the sense that differentiation with respect to it is not supported. We can create a tensor that works like a variable with an extra option:


import torch
 
x = torch.tensor([1., 2., 3.], requires_grad=True)
print(x)
print(x.shape)
print(x.dtype)

This will print:

tensor([1., 2., 3.], requires_grad=True)
torch.Size([3])
torch.float32

Note that, in the above, we created a tensor of floating point values. This is required because differentiation works on floating point numbers, not integers.
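
As a quick check of this requirement (an illustrative addition, not part of the original article), asking for gradients on an integer tensor raises an error; the exact message may differ between PyTorch versions:


import torch
 
try:
    x = torch.tensor([1, 2, 3], requires_grad=True)  # integer dtype
except RuntimeError as err:
    # only floating point (and complex) tensors can require gradients
    print(err)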

The operations (such as x+x and 2*x) can still be applied, but in this case, the tensor will remember how it got its values. We can demonstrate this feature in the following:


import torch
 
x = torch.tensor(3.6, requires_grad=True)
y = x * x
y.backward()
print(x.grad)

This prints:

tensor(7.2000)

What it does is the following: it defines a variable x (with value 3.6) and then computes y = x*x, i.e., y = x². Then we ask for the differentiation of y. Since y obtained its value from x, we can find the derivative dy/dx at x.grad, in the form of a tensor, immediately after we run y.backward(). You know y = x² means y' = 2x. Hence the output gives you a value of 3.6 × 2 = 7.2.
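
The same mechanism works for any differentiable expression built from tensor operations, not just x*x. As an illustrative addition (not from the original article), the derivative of 3x² + 2x + 1 is 6x + 2, which autograd reproduces at x = 3.6:


import torch
 
x = torch.tensor(3.6, requires_grad=True)
y = 3 * x**2 + 2 * x + 1
y.backward()
print(x.grad)  # tensor(23.6000), since 6 * 3.6 + 2 = 23.6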

Using Autograd for Polynomial Regression

How is this feature in PyTorch helpful? Let’s consider a case where you have a polynomial in the form of y = f(x), and you are given several (x, y) samples. How can you recover the polynomial f(x)? One way is to assume random coefficients for the polynomial and feed in the samples (x, y). If the polynomial is found, you should see the value of y match f(x). The closer they are, the closer your estimate is to the correct polynomial.

This is indeed a numerical optimization problem in which you want to minimize the difference between y and f(x). You can use gradient descent to solve it.

Let’s consider an example. You can build the polynomial f(x) = x² + 2x + 3 in NumPy as follows:


import numpy as np
 
polynomial = np.poly1d([1, 2, 3])
print(polynomial)

This prints:

   2
1 x + 2 x + 3

You may use the polynomial as a function, such as:

print(polynomial(1.5))

And this prints 8.25, for (1.5)² + 2×(1.5) + 3 = 8.25.
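
A np.poly1d object also evaluates element-wise on arrays, which is what makes the sampling step below work. A small check (an illustrative addition, not part of the original article):


import numpy as np
 
polynomial = np.poly1d([1, 2, 3])
print(polynomial(np.array([0.0, 1.0, 1.5])))  # prints approximately [3. 6. 8.25]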

Now you can generate a number of samples from this function using NumPy:


N = 20   # number of samples
 
# Generate random samples roughly between -10 and +10
X = np.random.randn(N,1) * 5
Y = polynomial(X)

In the above, both X and Y are NumPy arrays of shape (20, 1), and they are related as y = f(x) for the polynomial f(x).

Now, assume you do not know what the polynomial is, except that it is quadratic, and you want to recover the coefficients. Since a quadratic polynomial is of the form Ax² + Bx + C, you have three unknowns to find. You can find them using a gradient descent algorithm you implement yourself or an existing gradient descent optimizer. The following demonstrates how it works:


import torch
 
# Assume samples X and Y are prepared elsewhere
 
XX = np.hstack([X*X, X, np.ones_like(X)])
 
w = torch.randn(3, 1, requires_grad=True)  # the 3 coefficients
x = torch.tensor(XX, dtype=torch.float32)  # input sample
y = torch.tensor(Y, dtype=torch.float32)   # output sample
optimizer = torch.optim.NAdam([w], lr=0.01)
print(w)
 
for _ in range(1000):
    optimizer.zero_grad()
    y_pred = x @ w
    mse = torch.mean(torch.square(y - y_pred))
    mse.backward()
    optimizer.step()
 
print(w)

The print statement before the for loop gives three random numbers, such as:


tensor([[1.3827],
        [0.8629],
        [0.2357]], requires_grad=True)

But the one after the for loop gives you coefficients very close to those in the polynomial:


tensor([[1.0004],
        [1.9924],
        [2.9159]], requires_grad=True)

What the above code does is the following: First, it creates a variable vector w of 3 values, namely the coefficients A, B, and C. Then you create an array of shape (N, 3), in which N is the number of samples in the array X. This array has 3 columns: the values of x², x, and 1, respectively. Such an array is built from the vector X using the np.hstack() function. Similarly, we build the PyTorch tensor y from the NumPy array Y.

Afterward, you use a for loop to run gradient descent for 1,000 iterations. In each iteration, you compute xw in matrix form to find Ax² + Bx + C and assign it to the variable y_pred. Then, you compare y and y_pred and find the mean square error. Next, you derive the gradient, i.e., the rate of change of the mean square error with respect to the coefficients w, using the backward() function. And based on this gradient, you use gradient descent to update w via the optimizer.

In essence, the above code finds the coefficients w that minimize the mean square error.
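
To make the optimizer’s role explicit, the following is a rough sketch of the same loop written with a plain gradient descent update instead of NAdam. It is an illustrative addition that assumes the tensors x, y, and w are prepared as above, and the learning rate is only a placeholder choice that may need tuning to converge:


# A sketch of plain gradient descent doing what the optimizer automates
learning_rate = 1e-5  # illustrative only; NAdam above adapts its own step sizes
for _ in range(1000):
    y_pred = x @ w
    mse = torch.mean(torch.square(y - y_pred))
    mse.backward()                    # fills w.grad with the gradient of mse w.r.t. w
    with torch.no_grad():             # update w outside of autograd tracking
        w -= learning_rate * w.grad
        w.grad.zero_()                # clear the gradient for the next iteration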

Putting everything together, the following is the complete code:


import numpy as np
import torch
 
polynomial = np.poly1d([1, 2, 3])
N = 20   # number of samples
 
# Generate random samples roughly between -10 and +10
X = np.random.randn(N,1) * 5
Y = polynomial(X)
 
# Prepare input as an array of shape (N,3)
XX = np.hstack([X*X, X, np.ones_like(X)])
 
# Prepare tensors
w = torch.randn(3, 1, requires_grad=True)  # the 3 coefficients
x = torch.tensor(XX, dtype=torch.float32)  # input sample
y = torch.tensor(Y, dtype=torch.float32)   # output sample
optimizer = torch.optim.NAdam([w], lr=0.01)
print(w)
 
# Run optimizer
for _ in range(1000):
    optimizer.zero_grad()
    y_pred = x @ w
    mse = torch.mean(torch.square(y - y_pred))
    mse.backward()
    optimizer.step()
 
print(w)
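
As a follow-up check (an illustrative addition, assuming the variables above are still in scope), you can turn the learned coefficients back into a NumPy polynomial and compare it with the true one at a few points:


# Compare the recovered polynomial with the true one on a few sample points
coeffs = w.detach().numpy().flatten()  # roughly [1, 2, 3] after training
recovered = np.poly1d(coeffs)
for value in (-2.0, 0.0, 1.5):
    print(value, polynomial(value), recovered(value))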

Using Autograd to Solve a Math Puzzle

In the above, 20 samples were used, which is more than enough to fit a quadratic equation. You may use gradient descent to solve some math puzzles as well. For example, the following problem:

[ A ]  +  [ B ]  =  9
  +         -
[ C ]  -  [ D ]  =  1
  =         =
  8         2

In other words, to find the values of A, B, C, and D such that:

A + B = 9
C - D = 1
A + C = 8
B - D = 2

This can also be solved using autograd, as follows:


import random
import torch
 
A = torch.tensor(random.random(), requires_grad=True)
B = torch.tensor(random.random(), requires_grad=True)
C = torch.tensor(random.random(), requires_grad=True)
D = torch.tensor(random.random(), requires_grad=True)
 
# Gradient descent loop
EPOCHS = 2000
optimizer = torch.optim.NAdam([A, B, C, D], lr=0.01)
for _ in range(EPOCHS):
    y1 = A + B - 9
    y2 = C - D - 1
    y3 = A + C - 8
    y4 = B - D - 2
    sqerr = y1*y1 + y2*y2 + y3*y3 + y4*y4
    optimizer.zero_grad()
    sqerr.backward()
    optimizer.step()
 
print(A)
print(B)
print(C)
print(D)

There can be multiple solutions to this problem. One solution is the following:


tensor(4.7191, requires_grad=True)
tensor(4.2808, requires_grad=True)
tensor(3.2808, requires_grad=True)
tensor(2.2808, requires_grad=True)

Which means A = 4.72, B = 4.28, C = 3.28, and D = 2.28. You can verify this solution fits the problem.
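
As a quick check (an illustrative addition, assuming A, B, C, and D are the tensors printed above), you can evaluate the four expressions and confirm they are close to the targets:


# Each left-hand side should be close to the target on the right
print(float(A + B))  # about 9
print(float(C - D))  # about 1
print(float(A + C))  # about 8
print(float(B - D))  # about 2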

The above code defines the four unknowns as variables with a random initial value. Then you compute the result of the four equations and compare it to the expected answer. You then sum up the squared error and ask PyTorch’s optimizer to minimize it. The minimum possible square error is zero, attained when our solution exactly fits the problem.

Note the way PyTorch produces the gradient: you ask for the gradient of sqerr, and PyTorch notices that, among its dependencies, only A, B, C, and D have requires_grad=True. Hence four gradients are computed. You then apply each gradient to the respective variable in each iteration via the optimizer.

Summary

In this post, we demonstrated how PyTorch’s automatic differentiation works. This is the building block for carrying out deep learning training. Specifically, you learned:

  • What is automatic differentiation in PyTorch
  • How you can use autograd and the backward() function to carry out automatic differentiation
  • How you can use automatic differentiation to solve an optimization problem

Original article sourced at: https://machinelearningmastery.com

#pytorch 
