1608676260
Yassun, an engineer working in education, explains linear-gradient!
#developer #aws
1603033200
What is supervised learning? In supervised learning, you have input-output pairs, and you train a model to map each given input to its output.
Another type of machine learning algorithm is unsupervised learning. Here, you don’t have an output variable; instead, you try to group the inputs by their similarities.
What is Regression? Regression is a statistical process for estimating the relationship between a dependent variable and one or more independent variables.
Linear regression, in particular, assumes that the output variable can be represented as a linear combination of the input variables.
Linear Regression Example
Depending upon the number of input variables, linear regression can be classified into simple and multiple linear regression. If the number of input variables is one, then it is called simple linear regression.
Simple Linear Regression Formula
If there is more than one input variable, then it is called multiple linear regression.
Multiple Linear Regression Formula
In this blog post, I will be discussing simple linear regression.
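As a preview, a simple linear regression fit can be sketched in a few lines of Python. The data below is hypothetical, invented purely for illustration:

```python
import numpy as np

# Hypothetical data: one input variable x and one output y, roughly y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.3, 6.2, 8.1, 9.9])

# Fit y_hat = m*x + c by least squares.
m, c = np.polyfit(x, y, deg=1)
print(m, c)  # slope close to 2, intercept close to 0
```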
#linear-regression-python #statistics #machine-learning #gradient-descent #linear-regression
1627203420
Learn how to use the linear-gradient CSS function with the background property to set a linear gradient as the background image.
We also briefly discuss the repeating-linear-gradient CSS function, which creates an image consisting of repeating linear gradients, allowing for the creation of countless patterns.
Code for this Project: https://codepen.io/Coding_Journey/pen/RmazyV
Support the Channel 💙☕🙏
PayPal: https://paypal.me/CodingJourney
Affiliate Links*
Bluehost Web Hosting: https://www.bluehost.com/track/codingjourney/
*By making a purchase through any of my affiliate links, I’ll receive a small commission at no additional cost to you. This helps support the channel and allows me to continue creating videos like this. Thank you for your support!
Suggested Videos:
Card Flip Effect (HTML & CSS): https://www.youtube.com/watch?v=Lc6wyl1KdOc
Typing Effect with HTML, CSS and JavaScript: https://www.youtube.com/watch?v=T4VE_6v9hFs
Arrow Swipe Game with HTML, CSS and JavaScript: https://www.youtube.com/watch?v=SogoaFv2CRQ&list=PLdGqEpyfYoMAYRa97MgWMlfJlbSJbxZXp
Thanks for watching! For any questions, suggestions or just to say hi, please use the comment section below!
Codepen: https://codepen.io/Coding_Journey/
Twitter: https://twitter.com/CodingJrney
Email: codingjourney123@gmail.com
Subscribe 💖
https://www.youtube.com/channel/UCwpH4liYtBSiVXSfL8x2TyQ?sub_confirmation=1
#css #linear-gradient
1594271340
Let’s begin our journey with the truth — machines never learn. What a typical machine learning algorithm does is find a mathematical equation that, when applied to a given set of training data, produces a prediction that is very close to the actual output.
Why is this not learning? Because if you change the training data or environment even slightly, the algorithm goes haywire! That is not how humans learn. If you learned to play a video game by looking straight at the screen, you would still be a good player if someone slightly tilted the screen, which would not be the case for an ML algorithm.
However, most of these algorithms are so complex and intimidating that they give our mere human intelligence the feel of actual learning, effectively hiding the underlying math. There is a dictum that if you can implement an algorithm, you know the algorithm. That saying is lost in the dense jungle of libraries and built-in modules that programming languages provide, reducing us to regular programmers calling an API and further strengthening the notion of a black box. Our quest will be to unravel the mysteries of this so-called ‘black box’ which magically produces accurate predictions, detects objects, diagnoses diseases, and claims it will one day surpass human intelligence.
We will start with one of the less complex and easier-to-visualize algorithms in the ML paradigm — Linear Regression. The article is divided into the following sections:
Need for Linear Regression
Visualizing Linear Regression
Deriving the formula for weight matrix W
Using the formula and performing linear regression on a real world data set
Note: Knowledge of linear algebra, a little calculus, and matrices is a prerequisite for understanding this article.
A basic understanding of Python, NumPy, and Matplotlib is also a must.
Regression means predicting a real-valued number from a given set of input variables, e.g., predicting temperature based on the month of the year, humidity, altitude above sea level, etc. Linear regression therefore means predicting a real-valued number that follows a linear trend. Linear regression is the first line of attack for discovering correlations in our data.
Now, the first thing that comes to mind when we hear the word linear is a line.
Yes! In linear regression, we try to fit a line that best generalizes all the data points in the data set. By generalizing, we mean we try to fit a line that passes very close to all the data points.
But how do we ensure that this happens? To understand this, let’s visualize 1-D linear regression, also called simple linear regression.
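One way to make “fitting a line that passes close to all the points” concrete is the least-squares solution for the weight vector W mentioned in the outline above. A minimal sketch with made-up data points:

```python
import numpy as np

# Hypothetical 1-D data set: points lying near the line y = 3x + 1.
X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])
y = np.array([1.2, 3.9, 7.1, 10.2, 12.8])

# Prepend a bias column so W holds [intercept, slope].
Xb = np.hstack([np.ones((X.shape[0], 1)), X])

# Least-squares weights: W = (X^T X)^(-1) X^T y
W = np.linalg.pinv(Xb.T @ Xb) @ Xb.T @ y
intercept, slope = W
```

The fitted line will not pass through every point, but it minimizes the total squared vertical distance to them.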
#calculus #machine-learning #linear-regression-math #linear-regression #linear-regression-python #python
1624447260
Because I am continuously endeavouring to improve my knowledge and skill in the Python programming language, I decided to take some free courses to build on my knowledge base. I found one such course, on linear algebra, on YouTube, and I decided to watch the video and undertake the course work because it focused on Python, a language I wanted to improve my skill in. YouTube video this course review was taken from: Python for linear algebra (for absolute beginners) — YouTube
The course is for absolute beginners, which is good because I have never studied linear algebra and had no idea what the terms I would be working with were.
Linear algebra is the branch of mathematics concerning linear equations, such as linear maps and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics.
Whilst studying linear algebra, I have learned a few topics that I had not previously known. For example:-
A scalar is simply a number, either an integer or a float. Scalars are convenient in applications that don’t need to be concerned with all the ways data can be represented in a computer.
A vector is a one-dimensional array of numbers. Unlike a fixed-size array, the vectors used in the course are mutable, and are known as dynamic arrays.
A matrix is a two-dimensional rectangular array of data stored in rows and columns. The data stored in a matrix can be strings, numbers, etcetera.
In addition to the basic components of linear algebra (the scalar, the vector, and the matrix), there are several ways vectors and matrices can be manipulated to make data suitable for machine learning.
I used Google Colab to code the programming examples and the assignments that were given in the 1 hour 51 minute video. It took a while to get into writing the code of the various subjects that were studied because, as the video stated, it is a course for absolute beginners.
The two main libraries used in this course were NumPy and Matplotlib. NumPy is the library used to carry out algebraic operations, and Matplotlib is used to graphically plot the points created in the program.
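The scalar, vector, and matrix concepts above map directly onto NumPy. A small illustrative sketch (the values are my own, not from the course):

```python
import numpy as np

s = 2.5                          # a scalar: a single number
v = np.array([1.0, 2.0, 3.0])    # a vector: a 1-D array of numbers
M = np.array([[1.0, 0.0, 2.0],   # a matrix: a 2-D array in rows and columns
              [0.0, 1.0, 1.0]])

scaled = s * v    # scalar-vector product: [2.5, 5.0, 7.5]
product = M @ v   # matrix-vector product (2x3 times length-3): [7.0, 5.0]
```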
#numpy #matplotlib #python #linear-algebra #course-review
1594890120
Gradient descent is an optimization algorithm used to minimize a cost function (i.e., error) parameterized by a model.
We know that Gradient means the slope of a surface or a line. This algorithm involves calculations with slope.
To understand gradient descent, we must first know what a cost function is.
The cost function (J) of linear regression here is the Root Mean Squared Error (RMSE) between the predicted y values and the true y values (y).
Source: https://www.geeksforgeeks.org/ml-linear-regression/
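As a quick numerical illustration of RMSE (with made-up values):

```python
import numpy as np

# Hypothetical true y values and predictions from some regression line.
y_true = np.array([3.0, 5.0, 7.0, 9.0])
y_pred = np.array([2.8, 5.3, 6.9, 9.2])

# Root Mean Squared Error between predicted and true y.
rmse = np.sqrt(np.mean((y_pred - y_true) ** 2))
```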
For a linear regression model, our ultimate goal is to find the parameter values that minimize the cost function.
To become familiar with linear regression, please go through this article.
First, let us visualize what gradient descent looks like, to understand it better.
As gradient descent is an iterative algorithm, we fit various lines to find the best-fit line iteratively.
Each time, we get an error value (the sum of squared errors, SSE).
If we plot all the error values on a graph, they form a parabola.
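This parabola can be checked numerically by sweeping candidate slopes (with the intercept held fixed at 0 for simplicity) and computing the SSE at each one. The data here is hypothetical:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.1, 5.9, 8.0])  # roughly y = 2x

slopes = np.linspace(0.0, 4.0, 81)
# Sum of squared errors for each candidate slope m.
sse = np.array([np.sum((y - m * x) ** 2) for m in slopes])

# The SSE curve is convex, with its minimum near the true slope of 2.
best = slopes[np.argmin(sse)]
```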
What is the relationship between the slope, the intercept, and the SSE? Why is the gradient descent cost curve a parabola?
To answer these two questions, let’s look at the example below.
Made in MS. Excel.
You can see that I took random values of m = 0.45 and c = 0.75.
For this slope and intercept, we compute new predicted Y values for the regression line.
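The iterative updates can be sketched in code: starting from the same m = 0.45 and c = 0.75, gradient descent on the MSE cost repeatedly nudges the slope and intercept downhill. The data and learning rate below are my own illustrative choices, not taken from the spreadsheet example:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([3.1, 5.0, 7.2, 8.9, 11.1])  # hypothetical data near y = 2x + 1

m, c = 0.45, 0.75   # starting slope and intercept, as in the example
lr = 0.01           # learning rate
n = len(x)

for _ in range(5000):
    error = (m * x + c) - y
    # Partial derivatives of the MSE cost with respect to m and c.
    dm = (2.0 / n) * np.sum(error * x)
    dc = (2.0 / n) * np.sum(error)
    m -= lr * dm
    c -= lr * dc
# m and c converge close to the least-squares fit (about 1.99 and 1.09 here).
```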
#gradient-descent #linear-regression #multiple-linear-regression #cost-function