1684394352
In this video we'll see how to create our own machine learning library, like Keras, from scratch in Python. The goal is to be able to assemble various neural network architectures in a Lego-like fashion. We'll see how to structure the code so that we can create one class per layer, and we'll go through the mathematics of every layer we implement, namely the Dense (or Fully Connected) layer and the Activation layer.
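To make the design concrete, here is a minimal sketch of that layer-per-class architecture (names and details are illustrative; the video's actual code is in the GitHub repo linked below):

import numpy as np

class Layer:
    # Base class: every layer propagates data forward and gradients backward.
    def forward(self, input):
        raise NotImplementedError

    def backward(self, output_gradient, learning_rate):
        raise NotImplementedError

class Dense(Layer):
    # Fully connected layer: Y = W X + B
    def __init__(self, input_size, output_size):
        self.weights = np.random.randn(output_size, input_size)
        self.bias = np.random.randn(output_size, 1)

    def forward(self, input):
        self.input = input
        return self.weights @ self.input + self.bias

    def backward(self, output_gradient, learning_rate):
        # dE/dW = dE/dY X^T, dE/dB = dE/dY, dE/dX = W^T dE/dY
        weights_gradient = output_gradient @ self.input.T
        # compute dE/dX before updating the weights (see correction below)
        input_gradient = self.weights.T @ output_gradient
        self.weights -= learning_rate * weights_gradient
        self.bias -= learning_rate * output_gradient
        return input_gradient

class Tanh(Layer):
    # Activation layer: applies tanh element-wise; backward scales by tanh'(x).
    def forward(self, input):
        self.input = input
        return np.tanh(input)

    def backward(self, output_gradient, learning_rate):
        return output_gradient * (1 - np.tanh(self.input) ** 2)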
Chapters:
00:00 Intro
01:09 The plan
01:56 ML Reminder
02:51 Implementation Design
06:40 Base Layer Code
07:55 Dense Layer Forward
10:42 Dense Layer Backward Plan
11:23 Dense Layer Weights Gradient
14:59 Dense Layer Bias Gradient
16:28 Dense Layer Input Gradient
18:22 Dense Layer Code
19:43 Activation Layer Forward
20:46 Activation Layer Input Gradient
22:30 Hyperbolic Tangent
23:24 Mean Squared Error
26:05 XOR Intro
27:04 Linear Separability
27:45 XOR Code
30:32 XOR Decision Boundary
Corrections:
17:46 Bottom row of W^t should be w1i, w2i, ..., wji
18:58 dE/dX should be computed before updating weights and biases
Animation framework from @3Blue1Brown : https://github.com/3b1b/manim
😺 GitHub: https://github.com/TheIndependentCode/Neural-Network
Subscribe: https://www.youtube.com/@independentcode/featured
1684393959
In this video we'll create a Convolutional Neural Network (or CNN) from scratch in Python. We'll go fully through the mathematics of the convolutional layer and then implement it. We'll also implement the Reshape layer, the Binary Cross Entropy loss, and the Sigmoid activation. Finally, we'll use all these objects to make a neural network capable of classifying handwritten digits from the MNIST dataset.
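As a rough sketch of the forward pass described in the video (per-input-channel valid cross-correlations summed for each kernel, with SciPy doing the 2D correlation; names are illustrative and the actual code is in the GitHub repo linked below):

import numpy as np
from scipy import signal

class Convolutional:
    def __init__(self, input_shape, kernel_size, depth):
        # depth = number of kernels, i.e. output channels
        input_depth, input_height, input_width = input_shape
        self.depth = depth
        self.input_depth = input_depth
        self.output_shape = (depth,
                             input_height - kernel_size + 1,
                             input_width - kernel_size + 1)
        self.kernels = np.random.randn(depth, input_depth, kernel_size, kernel_size)
        self.biases = np.random.randn(*self.output_shape)

    def forward(self, input):
        self.input = input
        output = np.copy(self.biases)
        for i in range(self.depth):
            for j in range(self.input_depth):
                # "valid" correlation shrinks each spatial dimension by kernel_size - 1
                output[i] += signal.correlate2d(input[j], self.kernels[i, j], mode="valid")
        return output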
Chapters:
00:00 Intro
00:33 Video Content
01:26 Convolution & Correlation
03:24 Valid Correlation
03:43 Full Correlation
04:35 Convolutional Layer - Forward
13:04 Convolutional Layer - Backward Overview
13:53 Convolutional Layer - Backward Kernel
18:14 Convolutional Layer - Backward Bias
20:06 Convolutional Layer - Backward Input
27:27 Reshape Layer
27:54 Binary Cross Entropy Loss
29:50 Sigmoid Activation
30:37 MNIST
Corrections:
23:45 The sum should go from 1 to *d*
Animation framework from @3Blue1Brown: https://github.com/3b1b/manim
😺 GitHub: https://github.com/TheIndependentCode/Neural-Network
Subscribe: https://www.youtube.com/@independentcode/featured
1681384274
This is a collection of (mostly) pen-and-paper exercises in machine learning. Each exercise comes with a detailed solution. The following topics are covered: linear algebra, optimisation, directed and undirected graphical models, the expressive power of graphical models, factor graphs and message passing, inference for hidden Markov models, model-based learning (including ICA and unnormalised models), sampling and Monte Carlo integration, and variational inference.
A compiled pdf is available on arXiv.
Please use the following reference for citations:
@TechReport{Gutmann2022a,
author = {Michael U. Gutmann},
title = {Pen and Paper Exercises in Machine Learning},
institution = {University of Edinburgh},
year = {2022},
arxiv = {https://arxiv.org/abs/2206.13446},
url = {https://github.com/michaelgutmann/ml-pen-and-paper-exercises},
}
The work is licensed under a Creative Commons Attribution 4.0 International License.
Under Linux, you can compile the collection with make. To remove temporary files, use make clean.
By default, the compiled document includes the solutions for the exercises. To compile a document without the solutions, comment \SOLtrue and uncomment \SOLfalse in main.tex.
Please use GitHub's issues to report mistakes or typos. I would welcome community contributions. The main idea is to provide exercises together with detailed solutions. Please get in touch to discuss options. My contact information is available here.
The tikz settings are based on macros kindly shared by David Barber. The macros were partly used for his book Bayesian Reasoning and Machine Learning. I make use of the ethuebung package developed by Philippe Faist. I hacked the style file to support multiple chapters and inclusion of the exercises in a table of contents. I developed parts of the linear algebra and optimisation exercises for the course Unsupervised Machine Learning at the University of Helsinki and the remaining exercises for the course Probabilistic Modelling and Reasoning at the University of Edinburgh.
Author: Michaelgutmann
Source Code: https://github.com/michaelgutmann/ml-pen-and-paper-exercises
1676875747
This organic chemistry video tutorial discusses the factors that affect the stability of negative charges, such as Atomic Size, Electronegativity, Resonance Stability & Electron Delocalization, The Inductive Effect, Solvating Effects, Aromaticity, and Hybridization. These topics are important in acids and bases.
A negative charge is an electrical property of a particle at the subatomic scale. An object is negatively charged if it has an excess of electrons, and is uncharged or positively charged otherwise. Such electrochemical activity plays a vital role in corrosion and its prevention.
Subscribe: https://www.youtube.com/@TheOrganicChemistryTutor/featured
1675753057
In this tutorial, we'll learn how to solve differential equations using two methods.
In mathematics, a differential equation is an equation that contains one or more functions and their derivatives. The derivatives define the rate of change of a function at a point. Differential equations are mainly used in fields such as physics, engineering, and biology. The primary purpose of a differential equation is the study of the solutions that satisfy the equation and of the properties of those solutions.
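As a small worked example (illustrative only, not necessarily one of the two methods covered in the video), separation of variables solves the first-order equation dy/dx = ky directly:

\frac{dy}{dx} = ky
\;\Longrightarrow\;
\int \frac{dy}{y} = \int k\,dx
\;\Longrightarrow\;
\ln|y| = kx + C
\;\Longrightarrow\;
y = Ae^{kx}, \quad A = e^{C}.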
Subscribe: https://www.youtube.com/@SyberMath/featured
1674786631
Learn college Algebra from an experienced university mathematics professor. You will also learn how to implement all the Algebra concepts using the Python programming language.
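As a small taste of the approach (a sketch using SymPy; the course's own code and libraries are linked in the syllabus below):

from sympy import symbols, Eq, solve

x, y = symbols("x y")

# One-variable linear equation: 3x + 5 = 20
print(solve(Eq(3 * x + 5, 20), x))            # [5]

# Quadratic equation: x^2 - 5x + 6 = 0
print(solve(Eq(x**2 - 5 * x + 6, 0), x))      # [2, 3]

# System of two linear equations
print(solve([Eq(x + y, 10), Eq(x - y, 2)], [x, y]))  # {x: 6, y: 4}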
⭐️ Contents ⭐️
⌨️ (00:00:00) Introduction
⌨️ (00:14:02) Ratios, Proportions, and conversions
⌨️ (00:32:22) Basic Algebra, solving equations (one variable)
⌨️ (01:07:44) Percents, Decimals, and Fractions
⌨️ (01:40:33) Math function definition, using two variables (x,y)
⌨️ (02:17:13) Slope and intercept on a graph
⌨️ (03:28:53) Factoring, finding common factors and factoring square roots
⌨️ (05:05:40) Graphing systems of equations
⌨️ (05:36:09) Solving systems of two equations
⌨️ (06:06:17) Applications of linear systems
⌨️ (07:30:29) Quadratic equations
⌨️ (09:34:44) Polynomial Graphs
⌨️ (10:19:10) Cost, Revenue, and Profit equations
⌨️ (11:05:19) Simple and compound interest formulas
⌨️ (12:15:27) Exponents and logarithms
⌨️ (14:19:13) Spreadsheets and Additional Resources
⌨️ (15:06:10) Conclusion
💻 Syllabus & Code: https://github.com/edatfreecodecamp/python-math/blob/main/Algebra-with-Python/Algebra-Read-Me-Course-Outline.md
#python #algebra #maths #mathematics #developer #computerscience #softwaredeveloper
1673665560
A collection of resources to learn and review mathematics for machine learning.
📖 Books
Algebra, Topology, Differential Calculus, and Optimization Theory for Computer Science and Machine Learning by Jean Gallier and Jocelyn Quaintance
Includes mathematical concepts for machine learning and computer science.
Book: https://www.cis.upenn.edu/~jean/math-deep.pdf
Deep Learning (Part I: Applied Math and Machine Learning Basics) by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
This includes the math basics for deep learning from the Deep Learning book.
Chapter: https://www.deeplearningbook.org/contents/part_basics.html
Mathematics for Machine Learning by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong
This is probably the place you want to start. Start slowly and work on some examples. Pay close attention to the notation and get comfortable with it.
Book: https://mml-book.github.io
Probabilistic Machine Learning: An Introduction by Kevin Patrick Murphy
This book contains a comprehensive overview of classical machine learning methods and the principles explaining them.
Book: https://probml.github.io/pml-book/book1.html
Mathematics for Deep Learning (an appendix of Dive into Deep Learning) contains mathematical concepts to help build a better understanding of deep learning.
Chapter: https://d2l.ai/chapter_appendix-mathematics-for-deep-learning/index.html
Bayes Rules! An Introduction to Applied Bayesian Modeling by Alicia A. Johnson, Miles Q. Ott, and Mine Dogucu
Great online book covering Bayesian approaches.
Book: https://www.bayesrulesbook.com/index.html
📄 Papers
The Matrix Calculus You Need For Deep Learning by Terence Parr & Jeremy Howard
In deep learning, you need to understand a bunch of fundamental matrix operations. If you want to dive deep into the math of matrix calculus, this is your guide.
Paper: https://arxiv.org/abs/1802.01528
An article summarising the importance of mathematics in deep learning research and how it’s helping to advance the field.
Paper: https://arxiv.org/pdf/2203.08890.pdf
🎥 Video Lectures
Mathematics for Machine Learning: Multivariate Calculus by Dr. Sam Cooper & Dr. David Dye
Backpropagation, a key algorithm for training deep neural nets, relies on calculus. Get familiar with concepts like the chain rule, the Jacobian, and gradient descent.
Video Playlist: https://www.youtube.com/playlist?list=PLiiljHvN6z193BBzS0Ln8NnqQmzimTW23
Mathematics for Machine Learning: Linear Algebra by Dr. Sam Cooper & Dr. David Dye
A great companion to the previous video lectures. Neural networks perform transformations on data and you need linear algebra to get better intuitions of how that is done.
Video Playlist: https://www.youtube.com/playlist?list=PLiiljHvN6z1_o1ztXTKWPrShrMrBLo5P3
Stanford CS229: Machine Learning by Anand Avati
Lectures containing mathematical explanations of many concepts in machine learning.
Course: https://www.youtube.com/playlist?list=PLoROMvodv4rNH7qL6-efu_q2_bPuy0adh
🧮 Math Basics
The Elements of Statistical Learning by Jerome H. Friedman, Robert Tibshirani, and Trevor Hastie
Machine learning deals with data and, in turn, uncertainty, which is what statistics aims to teach. Get comfortable with topics like estimators, statistical significance, etc.
Book: https://hastie.su.domains/ElemStatLearn/
If you are interested in an introduction to statistical learning, then you might want to check out "An Introduction to Statistical Learning".
Probability Theory: The Logic of Science by E. T. Jaynes
In machine learning, we are interested in building probabilistic models and thus you will come across concepts from probability theory like conditional probability and different probability distributions.
Source: https://bayes.wustl.edu/etj/prob/book.pdf
Information Theory, Inference, and Learning Algorithms by David J. C. MacKay
When you are applying machine learning, you are dealing with information processing, which in essence relies on ideas from information theory, such as entropy and KL divergence.
Book: https://www.inference.org.uk/itprnn/book.html
Statistics and Probability by Khan Academy
A complete overview of statistics and probability required for machine learning.
Course: https://www.khanacademy.org/math/statistics-probability
Slides and video lectures on the popular linear algebra book Linear Algebra Done Right.
Lecture and Slides: https://linear.axler.net/LADRvideos.html
Linear Algebra by Khan Academy
Vectors, matrices, operations on them, dot & cross products, and matrix multiplication are essential for the most basic understanding of ML maths.
Course: https://www.khanacademy.org/math/linear-algebra
Calculus by Khan Academy
Precalculus, Differential Calculus, Integral Calculus, Multivariate Calculus
Course: https://www.khanacademy.org/math/calculus-home
This collection is far from exhaustive but it should provide a good foundation to start learning some of the mathematical concepts used in machine learning. Reach out on Twitter if you have any questions.
Author: Dair-ai
Source Code: https://github.com/dair-ai/Mathematics-for-ML
1670228075
In this video we explain what dummy variables are and how you can easily create them online.
Categorical variables with two characteristics can be used as independent variables (predictors) in a regression. Variables with two characteristics are also called dichotomous, e.g. gender with the characteristics male and female.
Normally, only independent variables with two characteristics can be included in a regression. If a variable has more characteristics, dummy variables must be formed: from a variable with n characteristics, n−1 new dummy variables with two characteristics each are created.
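For example, a minimal sketch in Python with pandas (the video shows how to do this online; this is just one offline way to get the same n−1 dummies):

import pandas as pd

# A categorical predictor with three characteristics: blue, green, red
df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# n characteristics -> n-1 dummy variables; drop_first drops the
# reference category ("blue"), leaving color_green and color_red
dummies = pd.get_dummies(df["color"], prefix="color", drop_first=True)
print(df.join(dummies))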
You will find more information here:
https://datatab.net/tutorial/regression
Subscribe: https://www.youtube.com/@datatab/featured
1669790472
The Fourier transform has a million applications across all sorts of fields in science and math. But one of the very deepest arises in quantum mechanics, where it provides a map between two parallel descriptions of a quantum particle: one in terms of the position space wavefunction, and a dual description in terms of the momentum space wavefunction. Understanding this connection is also one of the best ways of learning what the Fourier transform really means.
We'll start by thinking about the quantum mechanics of a particle on a circle, which requires that the wavefunction be periodic. That lets us expand it in a Fourier series---a superposition of many sine and cosine functions, or equivalently complex exponential functions. We'll see that these individual Fourier waves are the eigenfunctions of the quantum momentum operator, and the corresponding eigenvalues are the numbers we can get when we go to measure the momentum of the particle. The coefficients of the Fourier series tell us the probabilities of which value we'll get.
Then, by taking the limit where the radius of this circular space goes to infinity, we'll return to the quantum mechanics of a particle on an infinite line. And what we'll discover is that the full-fledged Fourier transform emerges directly from the Fourier series in this limit, and that gives us a powerful intuition for understanding what the Fourier transform means. We'll look at an example that shows that when the position space wavefunction is a narrow spike, so that we have a good idea of where the particle is in space, the momentum space wavefunction will be spread out across a huge range. By knowing the position of the particle precisely, we don't have a clue what the momentum will be, and vice-versa! This is the Heisenberg uncertainty principle in action.
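In symbols (a sketch of the setup described, using standard conventions): on a circle of radius R the periodic wavefunction expands as

\psi(x) = \sum_{n=-\infty}^{\infty} c_n\, \frac{e^{inx/R}}{\sqrt{2\pi R}},
\qquad
\hat{p} = -i\hbar\frac{\partial}{\partial x},
\qquad
\hat{p}\, e^{inx/R} = \frac{n\hbar}{R}\, e^{inx/R},

so a momentum measurement returns p_n = nℏ/R with probability |c_n|², and the spacing between allowed momenta shrinks to zero as R → ∞, turning the series into the Fourier transform integral.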
0:00 Introduction
2:56 The Fourier series
16:08 The Fourier transform
25:37 An example
Develop a deep understanding of the Fourier transform by appreciating the critical role it plays in quantum mechanics! Get the notes for free here: https://courses.physicswithelliot.com/notes-sign-up
Sign up for my newsletter for additional physics lessons: https://www.physicswithelliot.com/sign-up
Subscribe: https://www.youtube.com/@PhysicswithElliot/featured
1667793578
Magnetic Field using Biot-Savart law: Circular Loop and Long Wire
This video covers the Biot-Savart law and uses it to calculate the magnetic field produced by a current loop. In the second example I show how to evaluate the field produced by a long wire; for the long wire I show how to solve the integral using trigonometric substitution and also using integral tables.
Biot-Savart's law is an equation that gives the magnetic field produced by a current-carrying segment. This segment is treated as a vector quantity known as the current element.
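For reference, the law and the two standard results these examples lead to (assuming the loop field is evaluated at the center of a loop of radius R and the wire field at distance d; the video's exact setups may differ):

d\vec{B} = \frac{\mu_0}{4\pi}\,\frac{I\,d\vec{l}\times\hat{r}}{r^2},
\qquad
B_{\text{center of loop}} = \frac{\mu_0 I}{2R},
\qquad
B_{\text{long wire}} = \frac{\mu_0 I}{2\pi d}.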
Subscribe: https://www.youtube.com/c/OnlinePhysicsNinja/featured
1667470987
BODMAS is one of the oldest and simplest rules in mathematics. Mathematics is a subject that calls for a clear set of instructions, correct use of formulas, and a fixed order of operations. When we're given a mathematical expression containing multiple operations, the best way to simplify it is to apply the basic BODMAS rule.
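For example, applying BODMAS (Brackets, Orders, Division, Multiplication, Addition, Subtraction) to a made-up expression:

6 + 2 \times (3^2 - 1) = 6 + 2 \times (9 - 1) = 6 + 2 \times 8 = 6 + 16 = 22.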
https://www.pw.live/full-form/bodmas-full-form
#mathematics #elearning #onlinecourse #maths #calculating
1667297780
The value of Pi (π) is defined as the ratio of the circumference of a circle to its diameter and is approximately equal to 3.14159.
The symbol for Pi is π, pronounced "pie." It is the 16th letter of the Greek alphabet and is used to represent a mathematical constant.
https://www.pw.live/math-articles/value-of-pi
#mathematics
#valueofpinetwork #valueofpiinfraction
#valueofpiindegrees
#fullvalueofpi
#valueofpiindecimal
#valueofpi22/7
1666434300
Dendriform dialgebra algorithms for computing with Loday's arithmetic on groves of planar binary trees
Installation of latest release version using Julia:
julia> Pkg.add("Dendriform")
Provides the types PBTree for planar binary trees, Grove for tree collections of constant degree, and GroveBin to compress grove data. This package defines various essential operations on planar binary trees and groves, like ∪ for union; ∨ for graft; left and right for branching; ⋖, ⋗, <, >, ≤, ≥ for Tamari's partial ordering; ⊴ for between; / and \ (i.e. over and under); and the dashv and vdash operations ⊣, ⊢, +, * for dendriform algebra.
View the documentation stable / latest for more features and examples.
The name of a tree represents it as a vector whose sequence is made up of n integers. Collections of planar binary trees are encoded into an equivalence class of matrices: two matrices encode the same grove if there exists a permutation taking the rows of one to the rows of the other. The binary tree grafting operation ∨ and the left and right additions are computed on a recursive principle; the defining formulas are given in the documentation linked above.
Together these non-commutative binary operations satisfy the properties of an associative dendriform dialgebra. The structures induced by Loday's definition of the sum have the partial ordering of the associahedron known as the Tamari lattice.
However, in this computational package, a stricter total ordering is constructed using a function that transforms the set-vector isomorphism obtained from the descending greatest-integer index position search method. The structure obtained from this total ordering is used to construct a reliable binary groveindex representation that encodes the essential data of any grove.
These algorithms are used to facilitate computations that provide insight into the Loday arithmetic.
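For reference, the standard dendriform identities (as in Loday's definition; quoted from the literature, not from the package docs) are

(a ⊣ b) ⊣ c = a ⊣ (b ⊣ c + b ⊢ c)
(a ⊢ b) ⊣ c = a ⊢ (b ⊣ c)
a ⊢ (b ⊢ c) = (a ⊣ b + a ⊢ b) ⊢ c

so that the total sum a + b = a ⊣ b + a ⊢ b is associative.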
Basic usage examples:
julia> using Dendriform
julia> Grove(3,7) ⊣ [1,2]∪[2,1]
[1,2,5,1,2]
[1,2,5,2,1]
[2,1,5,1,2]
[2,1,5,2,1]
[1,5,3,1,2]
[1,5,2,1,3]
[1,5,1,2,3]
[1,5,3,2,1]
[1,5,1,3,1]
Y5 #9/42
julia> Grove(2,3) * ([1,2,3]∪[3,2,1]) |> GroveBin
2981131286847743360614880957207748817969 Y6 #30/132 [54.75%]
julia> [2,1,7,4,1,3,1] < [2,1,7,4,3,2,1]
true
Author: Chakravala
Source Code: https://github.com/chakravala/Dendriform.jl
License: GPL-3.0 license
1665996137
The C++ integral arithmetic operations present a challenge in formal interface design. Their preconditions are nontrivial, their postconditions are exacting, and they are deeply interconnected by mathematical theorems. I will address this challenge, presenting interfaces, theorems, and proofs in a lightly extended C++.
This talk takes its title from Bertrand Russell’s and Alfred North Whitehead’s logicist tour de force, Principia Mathematica. It echoes that work in developing arithmetic from first principles, but starts from procedural first principles: stability of objects, substitutability of values, and repeatability of operations.
In sum, this talk is one part formal interface design, one part tour of C++ integral arithmetic, one part foundations of arithmetic, and one part writing mathematical proofs procedurally.
#cplusplus #cpp #programming #mathematics #math
1665462168
In this tutorial, we explain the differences between the Riemann integral and the Lebesgue integral in a demonstrative way.
In the branch of mathematics known as real analysis, the Riemann integral, created by Bernhard Riemann, was the first rigorous definition of the integral of a function on an interval. It was presented to the faculty at the University of Göttingen in 1854, but not published in a journal until 1868. For many functions and practical applications, the Riemann integral can be evaluated by the fundamental theorem of calculus or approximated by numerical integration. (Wikipedia)
Lebesgue integration is an alternative way of defining the integral in terms of measure theory that is used to integrate a much broader class of functions than the Riemann integral or even the Riemann-Stieltjes integral. The idea behind the Lebesgue integral is that instead of approximating the total area by dividing it into vertical strips, one approximates the total area by dividing it into horizontal strips. This corresponds to asking "for each y-value, how many x-values produce this value?" as opposed to asking "for each x-value, what y-value does it produce?"
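In formulas (a sketch for a non-negative function f, using the standard "layer cake" form of the Lebesgue integral):

\int_a^b f(x)\,dx \;\approx\; \sum_i f(x_i^*)\,\Delta x_i \quad\text{(Riemann: vertical strips)},

\int f\,d\mu \;=\; \int_0^\infty \mu\big(\{x : f(x) > t\}\big)\,dt \quad\text{(Lebesgue: horizontal slices)}.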
(This explanation fits lectures for students in their first year of study: Mathematics for Physicists, Mathematics for the Natural Sciences, Mathematics for Engineers, and so on.)
I hope that this helps students, pupils and others.
0:00 Introduction
0:30 Riemann integral
2:00 Problems of Riemann integral
7:50 Riemann integral definition
9:13 Lebesgue integral - idea
Subscribe: https://www.youtube.com/c/brightsideofmaths/featured