1684394352

## How to Create a Neural Network from Scratch | Mathematics & Python Code

In this video we'll see how to create our own machine learning library, like Keras, from scratch in Python. The goal is to be able to create various neural network architectures in a Lego-like fashion. We'll see how to architect the code so that we can create one class per layer. We will go through the mathematics of every layer that we implement, namely the Dense (or Fully Connected) layer and the Activation layer.
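
The one-class-per-layer design can be sketched as follows. This is a minimal pure-Python illustration of a Dense layer, not the video's actual code; the weight initialization and method names are my own choices.

```python
# Minimal sketch of a Dense (fully connected) layer with forward and
# backward passes: y_j = sum_i w_ji * x_i + b_j. Pure Python, illustrative
# only; a real implementation would use numpy.
import random

class Dense:
    def __init__(self, input_size, output_size):
        self.weights = [[random.uniform(-0.5, 0.5) for _ in range(input_size)]
                        for _ in range(output_size)]
        self.bias = [0.0] * output_size

    def forward(self, x):
        self.input = x
        return [sum(w * xi for w, xi in zip(row, x)) + b
                for row, b in zip(self.weights, self.bias)]

    def backward(self, output_grad, lr):
        # dE/dX = W^T @ dE/dY, computed BEFORE updating the weights.
        input_grad = [sum(self.weights[j][i] * output_grad[j]
                          for j in range(len(output_grad)))
                      for i in range(len(self.input))]
        for j, g in enumerate(output_grad):
            for i, xi in enumerate(self.input):
                self.weights[j][i] -= lr * g * xi  # dE/dW = dE/dY @ X^T
            self.bias[j] -= lr * g                 # dE/dB = dE/dY
        return input_grad
```

Each layer exposing the same `forward`/`backward` interface is what makes the Lego-like stacking possible.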

Chapters:
00:00 Intro
01:09 The plan
01:56 ML Reminder
02:51 Implementation Design
06:40 Base Layer Code
07:55 Dense Layer Forward
10:42 Dense Layer Backward Plan
18:22 Dense Layer Code
19:43 Activation Layer Forward
22:30 Hyperbolic Tangent
23:24 Mean Squared Error
26:05 XOR Intro
27:04 Linear Separability
27:45 XOR Code
30:32 XOR Decision Boundary

Corrections:
17:46 Bottom row of W^t should be w1i, w2i, ..., wji
18:58 dE/dX should be computed before updating weights and biases

Animation framework from @3Blue1Brown  : https://github.com/3b1b/manim

1684393959

## Create a Convolutional Neural Network from Scratch | Mathematics & Python Code

In this video we'll create a Convolutional Neural Network (CNN) from scratch in Python. We'll go fully through the mathematics of the convolutional layer and then implement it. We'll also implement the Reshape layer, the Binary Cross Entropy loss, and the Sigmoid activation. Finally, we'll use all these objects to build a neural network capable of classifying handwritten digits from the MNIST dataset.
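
The valid correlation at the heart of the convolutional layer's forward pass can be sketched like this (my own minimal pure-Python version, not the video's code):

```python
# 2D valid cross-correlation: slide the kernel over the image without
# padding, so the output shrinks to (H - kh + 1) x (W - kw + 1).
def correlate2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    oh = len(image) - kh + 1
    ow = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(ow)]
            for i in range(oh)]
```

Convolution proper is the same operation with the kernel rotated by 180 degrees, which is why the two terms are introduced together in the video.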

Chapters:
00:00 Intro
00:33 Video Content
01:26 Convolution & Correlation
03:24 Valid Correlation
03:43 Full Correlation
04:35 Convolutional Layer - Forward
13:04 Convolutional Layer - Backward Overview
13:53 Convolutional Layer - Backward Kernel
18:14 Convolutional Layer - Backward Bias
20:06 Convolutional Layer - Backward Input
27:27 Reshape Layer
27:54 Binary Cross Entropy Loss
29:50 Sigmoid Activation
30:37 MNIST

Corrections:
23:45 The sum should go from 1 to *d*

Animation framework from @3Blue1Brown: https://github.com/3b1b/manim

1681384274

## Pen and paper exercises in machine learning

This is a collection of (mostly) pen-and-paper exercises in machine learning. Each exercise comes with a detailed solution. The following topics are covered:

• linear algebra
• optimisation
• directed graphical models
• undirected graphical models
• expressive power of graphical models
• factor graphs and message passing
• inference for hidden Markov models
• model-based learning (including ICA and unnormalised models)
• sampling and Monte-Carlo integration
• variational inference

A compiled pdf is available on arXiv.

Please use the following reference for citations:

@TechReport{Gutmann2022a,
author      = {Michael U. Gutmann},
title       = {Pen and Paper Exercises in Machine Learning},
institution = {University of Edinburgh},
year        = {2022},
arxiv       = {https://arxiv.org/abs/2206.13446},
url         = {https://github.com/michaelgutmann/ml-pen-and-paper-exercises},
}


## Usage

Under Linux, you can compile the collection with `make`. To remove temporary files, use `make clean`.

By default, the compiled document includes the solutions for the exercises. To compile a document without the solutions, comment out `\SOLtrue` and uncomment `\SOLfalse` in `main.tex`.

## Contributing

Please use GitHub's issues to report mistakes or typos. I would welcome community contributions. The main idea is to provide exercises together with detailed solutions. Please get in touch to discuss options. My contact information is available here.

## Acknowledgements

The tikz settings are based on macros kindly shared by David Barber. The macros were partly used for his book Bayesian Reasoning and Machine Learning. I make use of the ethuebung package developed by Philippe Faist. I hacked the style file to support multiple chapters and inclusion of the exercises in a table of contents. I developed parts of the linear algebra and optimisation exercises for the course Unsupervised Machine Learning at the University of Helsinki and the remaining exercises for the course Probabilistic Modelling and Reasoning at the University of Edinburgh.

Author: Michael Gutmann
Source Code: https://github.com/michaelgutmann/ml-pen-and-paper-exercises

1676875747

## Stability of Negative Charges Acids and Bases

This organic chemistry video tutorial discusses the factors that affect the stability of negative charges, such as atomic size, electronegativity, resonance stabilization and electron delocalization, the inductive effect, solvation effects, aromaticity, and hybridization. These topics are important in understanding acids and bases.

A negative charge is an electrical property of a particle at the subatomic scale. An object is negatively charged if it has an excess of electrons, and is uncharged or positively charged otherwise. Such electrochemical activity plays a vital role in corrosion and its prevention.

1675753057

## How to Solve Differential Equations with Two Methods

In this tutorial, we'll learn how to solve differential equations using two methods.

In mathematics, a differential equation is an equation that relates one or more functions to their derivatives. The derivative of a function describes its rate of change at a point. Differential equations are used mainly in fields such as physics, engineering, and biology. The primary purpose of studying differential equations is to examine the solutions that satisfy the equations and the properties of those solutions.
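
As a small sketch of the two-method idea, here is the equation dy/dt = -y with y(0) = 1 solved analytically (separation of variables gives y = e^(-t)) and numerically (Euler's method). The function names and step size are my own choices, not the tutorial's.

```python
# Solve dy/dt = -y, y(0) = 1 two ways: the exact solution from
# separation of variables, and a forward-Euler approximation.
import math

def exact(t):
    return math.exp(-t)        # integrate dy/y = -dt, apply y(0) = 1

def euler(t_end, steps):
    h = t_end / steps
    y = 1.0
    for _ in range(steps):
        y += h * (-y)          # step along the slope y' = -y
    return y
```

As the step count grows, the Euler approximation converges to the exact value.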

1674786631

## College Algebra – Full Course with Python Code

Learn college algebra from an experienced university mathematics professor. You will also learn how to implement all the algebra concepts using the Python programming language.

⭐️ Contents ⭐️
⌨️ (00:00:00) Introduction
⌨️ (00:14:02) Ratios, Proportions, and conversions
⌨️ (00:32:22) Basic Algebra, solving equations (one variable)
⌨️ (01:07:44) Percents, Decimals, and Fractions
⌨️ (01:40:33) Math function definition, using two variables (x,y)
⌨️ (02:17:13) Slope and intercept on a graph
⌨️ (03:28:53) Factoring, finding common factors and factoring square roots
⌨️ (05:05:40) Graphing systems of equations
⌨️ (05:36:09) Solving systems of two equations
⌨️ (06:06:17) Applications of linear systems
⌨️ (09:34:44) Polynomial Graphs
⌨️ (10:19:10) Cost, Revenue, and Profit equations
⌨️ (11:05:19) Simple and compound interest formulas
⌨️ (12:15:27) Exponents and logarithms
⌨️ (15:06:10) Conclusion
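
The "Solving systems of two equations" step above can be sketched in Python. This is my own minimal illustration using Cramer's rule, not the course's code.

```python
# Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule.
def solve_2x2(a1, b1, c1, a2, b2, c2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution (parallel or identical lines)")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# Example: x + y = 3 and x - y = 1 intersect at (2, 1).
```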

#python #algebra #maths #mathematics #developer #computerscience #softwaredeveloper

1673665560

## Mathematics for Machine Learning

A collection of resources to learn and review mathematics for machine learning.

📖 Books

### Algebra, Topology, Differential Calculus, and Optimization Theory For Computer Science and Machine Learning

by Jean Gallier and Jocelyn Quaintance

Includes mathematical concepts for machine learning and computer science.

### Applied Math and Machine Learning Basics

by Ian Goodfellow and Yoshua Bengio and Aaron Courville

This includes the math basics for deep learning from the Deep Learning book.

### Mathematics for Machine Learning

by Marc Peter Deisenroth, A. Aldo Faisal, and Cheng Soon Ong

This is probably the place you want to start. Start slowly and work on some examples. Pay close attention to the notation and get comfortable with it.

### Probabilistic Machine Learning: An Introduction

by Kevin Patrick Murphy

This book contains a comprehensive overview of classical machine learning methods and the principles explaining them.

### Mathematics for Deep Learning

This reference contains some mathematical concepts to help build a better understanding of deep learning.

### Bayes Rules! An Introduction to Applied Bayesian Modeling

by Alicia A. Johnson, Miles Q. Ott, Mine Dogucu

Great online book covering Bayesian approaches.

📄 Papers

### The Matrix Calculus You Need For Deep Learning

by Terence Parr & Jeremy Howard

In deep learning, you need to understand a handful of fundamental matrix operations. If you want to dive deep into the math of matrix calculus, this is your guide.

### The Mathematics of AI

An article summarising the importance of mathematics in deep learning research and how it’s helping to advance the field.

🎥 Video Lectures

### Multivariate Calculus by Imperial College London

by Dr. Sam Cooper & Dr. David Dye

Backpropagation, the key algorithm for training deep neural nets, relies on calculus. Get familiar with concepts like the chain rule, the Jacobian, and gradient descent.

### Mathematics for Machine Learning - Linear Algebra

by Dr. Sam Cooper & Dr. David Dye

A great companion to the previous video lectures. Neural networks perform transformations on data and you need linear algebra to get better intuitions of how that is done.

### CS229: Machine Learning

by Anand Avati

Lectures containing mathematical explanations to many concepts in machine learning.

🧮 Math Basics

### The Elements of Statistical Learning

by Jerome H. Friedman, Robert Tibshirani, and Trevor Hastie

Machine learning deals with data and in turn uncertainty which is what statistics aims to teach. Get comfortable with topics like estimators, statistical significance, etc.

If you are interested in an introduction to statistical learning, then you might want to check out "An Introduction to Statistical Learning".

### Probability Theory: The Logic of Science

by E. T. Jaynes

In machine learning, we are interested in building probabilistic models and thus you will come across concepts from probability theory like conditional probability and different probability distributions.

### Information Theory, Inference and Learning Algorithms

by David J. C. MacKay

When you are applying machine learning you are dealing with information processing, which in essence relies on ideas from information theory such as entropy and KL divergence.

### Statistics and probability

A complete overview of statistics and probability required for machine learning.

### Linear Algebra Done Right

Slides and video lectures on the popular linear algebra book Linear Algebra Done Right.

### Linear Algebra

Vectors, matrices, operations on them, the dot and cross products, and matrix multiplication are essential for the most basic understanding of ML math.

### Calculus

Precalculus, Differential Calculus, Integral Calculus, Multivariate Calculus

This collection is far from exhaustive but it should provide a good foundation to start learning some of the mathematical concepts used in machine learning. Reach out on Twitter if you have any questions.

Author: Dair-ai
Source Code: https://github.com/dair-ai/Mathematics-for-ML

1670228075

## What is a Dummy Variable ? Dummy Variables in Multiple Regression

In this video we explain what dummy variables are and how you can easily create them online.

Categorical variables with two characteristics can be used as independent variables (predictors) in a regression. Variables with two characteristics are also called dichotomous, e.g. gender with the characteristics male and female.

Normally, only independent variables with two characteristics can be considered in a regression. If the variables have more characteristics, dummy variables must be formed. From a variable with n characteristics, n-1 new dummy variables with 2 characteristics each are created.

https://datatab.net/tutorial/regression

1669790472

## Understand the Fourier Transform : Start From Quantum Mechanics

The Fourier transform has a million applications across all sorts of fields in science and math. But one of the very deepest arises in quantum mechanics, where it provides a map between two parallel descriptions of a quantum particle: one in terms of the position space wavefunction, and a dual description in terms of the momentum space wavefunction. Understanding this connection is also one of the best ways of learning what the Fourier transform really means.

We'll start by thinking about the quantum mechanics of a particle on a circle, which requires that the wavefunction be periodic. That lets us expand it in a Fourier series---a superposition of many sine and cosine functions, or equivalently complex exponential functions. We'll see that these individual Fourier waves are the eigenfunctions of the quantum momentum operator, and the corresponding eigenvalues are the numbers we can get when we go to measure the momentum of the particle. The coefficients of the Fourier series tell us the probabilities of which value we'll get.

Then, by taking the limit where the radius of this circular space goes to infinity, we'll return to the quantum mechanics of a particle on an infinite line. And what we'll discover is that the full-fledged Fourier transform emerges directly from the Fourier series in this limit, and that gives us a powerful intuition for understanding what the Fourier transform means. We'll look at an example that shows that when the position space wavefunction is a narrow spike, so that we have a good idea of where the particle is in space, the momentum space wavefunction will be spread out across a huge range. By knowing the position of the particle precisely, we don't have a clue what the momentum will be, and vice-versa! This is the Heisenberg uncertainty principle in action.
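
The narrow-spike example can be sketched numerically with a discrete Fourier transform. This is my own pure-Python illustration of the spread-out spectrum, not the video's material.

```python
# DFT of a perfectly localized "wavefunction" (a single spike):
# every frequency coefficient has the same magnitude, i.e. maximal
# spread in the dual (momentum-like) description.
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

spike = [0.0] * 8
spike[3] = 1.0            # "particle" localized at a single point
spectrum = dft(spike)     # |coefficient| = 1 at every frequency
```

Knowing the position exactly leaves the momentum completely undetermined, which is the uncertainty trade-off the video describes.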

0:00 Introduction
2:56 The Fourier series
16:08 The Fourier transform
25:37 An example

Develop a deep understanding of the Fourier transform by appreciating the critical role it plays in quantum mechanics! Get the notes for free here: https://courses.physicswithelliot.com/notes-sign-up

1667793578

## The Biot-Savart Law | Magnetic Field using Biot-Savart law

Magnetic Field using Biot-Savart law: Circular Loop and Long Wire

This video introduces the Biot-Savart law and uses it to calculate the magnetic field produced by a current loop. In the second example, I show how to evaluate the field produced by a long wire, solving the integral both by trigonometric substitution and by using integral tables.

Biot-Savart’s law is an equation that gives the magnetic field produced by a current-carrying segment. This segment is treated as a vector quantity known as the current element.
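
The circular-loop case can be checked numerically: summing the dB contributions of small current elements around the loop reproduces the closed-form field at the center, B = mu0*I/(2R). This is my own sketch with made-up parameter values, not the video's worked example.

```python
# Discretized Biot-Savart sum for the field at the center of a circular
# current loop. Each element dl is perpendicular to the vector r pointing
# to the center, so |dl x r_hat| = dl and all contributions add up on axis.
import math

MU0 = 4 * math.pi * 1e-7          # vacuum permeability (T*m/A)

def loop_center_field(current, radius, segments=10000):
    dl = 2 * math.pi * radius / segments          # length of each element
    dB = MU0 * current * dl / (4 * math.pi * radius ** 2)
    return segments * dB                          # total field at center
```

Since every segment contributes equally, the sum telescopes exactly to mu0*I/(2R), matching the textbook result.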

1667470987

## What is The Full Form of BODMAS ? | Physics Wallah

It is one of the oldest and simplest rules in mathematics. Mathematics is a subject that calls for a clear set of instructions, usage of formulas, and order of operations. When we're given a mathematical expression containing multiple operations, the best way to simplify it is to apply the BODMAS rule: Brackets, Orders, Division, Multiplication, Addition, Subtraction.
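
Python's operator precedence follows the same ordering, so the rule can be demonstrated directly. These expressions are my own examples.

```python
# BODMAS in action: brackets first, then orders (exponents), then
# division/multiplication left to right, then addition/subtraction.
expr = 2 + 3 * 4 ** 2        # orders first (4**2 = 16), then 3*16, then +2
left_to_right = 8 / 2 * 3    # D and M share a level: (8/2)*3, not 8/(2*3)
with_brackets = 8 / (2 * 3)  # brackets override the default order
```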

https://www.pw.live/full-form/bodmas-full-form

1667297780

## IMPORTANT FACTS ABOUT PI (π)

The value of pi (π) is defined as the ratio of the circumference of a circle to its diameter and is approximately equal to 3.14159.
The symbol for pi is the Greek letter π, pronounced "pie." It is the 16th letter of the Greek alphabet and is used to represent this mathematical constant.
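
The circumference-to-diameter definition can be illustrated numerically with Archimedes' classical approach: the perimeter of a regular polygon inscribed in a circle, divided by the diameter, approaches π as the number of sides grows. This sketch is my own illustration.

```python
# Perimeter of an inscribed regular polygon divided by the diameter
# converges to pi as the side count increases.
import math

def pi_estimate(sides):
    radius = 1.0
    side_length = 2 * radius * math.sin(math.pi / sides)  # chord length
    return sides * side_length / (2 * radius)             # perimeter / diameter
```

With 6 sides the estimate is exactly 3 (the inscribed hexagon), and it improves rapidly from there.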

https://www.pw.live/math-articles/value-of-pi

#mathematics
#valueofpinetwork #valueofpiinfraction
#valueofpiindegrees
#fullvalueofpi
#valueofpiindecimal
#valueofpi22/7

1666434300

## Dendriform.jl

Dendriform dialgebra algorithms to compute using Loday's arithmetic on groves of planar binary trees

## Setup

Installation of latest release version using Julia:

julia> Pkg.add("Dendriform")


Provides the types PBTree for planar binary trees, Grove for tree collections of constant degree, and GroveBin to compress grove data. This package defines various essential operations on planar binary trees and groves like ∪ for union; ∨ for graft; left and right for branching; ⋖, ⋗, <, >, ≤, ≥ for Tamari's partial ordering; ⊴ for between; / and \ (i.e. over and under); and the dashv and vdash operations ⊣, ⊢, +, * for dendriform algebra.

View the documentation stable / latest for more features and examples.

## Background

We call $\omega(\tau) := [\omega(\tau^l), n, \omega(\tau^r)] = [d_1, d_2, \dots, d_n]$ the name of a tree to represent it as a vector, where the sequence is made up of $n$ integers. Collections of planar binary trees are encoded into an equivalence class of matrices:

$\mathbb{Y}_n^m \cong \Lambda_n^m = \left\{ A \in \text{Mat}_{m\times n}(\mathbb{Z}^+) : \forall i (\exists! \tau \in \mathbb{Y}_n^1)(A_{i,*} = \omega(\tau)), \forall i,j (A_{i,*} \neq A_{j,*}) \right\} / \sim$

where $A \sim B$ if there exists a permutation $f \in S_k$ so that $\forall i (A_{i,*} = B_{f(i),*})$. The binary tree grafting operation is computed

$\omega(\alpha \vee \beta) = \omega(\alpha) \vee \omega(\beta) := [\omega(\alpha), a+1+b, \omega(\beta)] \in \Lambda_{a+b+1}^1$

The left and right addition are computed on the following recursive principle:

$\xi \dashv \eta = \bigcup_{i} \bigcup_{\tau \in \xi_i^r + \eta} \xi_i^l \vee \tau \qquad \text{and} \qquad \xi \vdash \eta = \bigcup_{j} \bigcup_{\tau \in \xi + \eta_j^l} \tau \vee \eta_j^r$

Together these non-commutative binary operations satisfy the properties of an associative dendriform dialgebra. The structures induced by Loday's definition of the sum have the partial ordering of the associahedron known as Tamari lattice.

• Figure: Tamari associahedron, colored to visualize noncommutative sums of [1,2] and [2,1], code: gist

However, in this computational package, a stricter total ordering is constructed using a function that transforms the set-vector isomorphism obtained from the descending greatest integer index position search method:

$\Theta(\mu) = \sum_{j=n}^{1} \sum_{k=1}^{\# e_j} (e_j)_k \cdot 10^{\delta(j,k)}, \qquad \text{where} \qquad \delta(j,k) = n - \sum_{r=1}^{j-1} \sum_{s=1}^{\# e_r} 1 - \sum_{s=1}^{k} 1$

The structure obtained from this total ordering is used to construct a reliable binary groveindex representation that encodes the essential data of any grove, using the formula

$\zeta_\gamma := \sum_{\tau \in \gamma} 2^{\theta_\tau - 1}$

These algorithms are used in order to facilitate computations that provide insight into the Loday arithmetic.

## Usage

Basic usage examples:

julia> using Dendriform

julia> Grove(3,7) ⊣ [1,2]∪[2,1]
[1,2,5,1,2]
[1,2,5,2,1]
[2,1,5,1,2]
[2,1,5,2,1]
[1,5,3,1,2]
[1,5,2,1,3]
[1,5,1,2,3]
[1,5,3,2,1]
[1,5,1,3,1]
Y5 #9/42

julia> Grove(2,3) * ([1,2,3]∪[3,2,1]) |> GroveBin
2981131286847743360614880957207748817969 Y6 #30/132 [54.75%]

julia> [2,1,7,4,1,3,1] < [2,1,7,4,3,2,1]
true


Author: Chakravala
Source Code: https://github.com/chakravala/Dendriform.jl

1665996137

## The Foundations of Arithmetic in C++

The C++ integral arithmetic operations present a challenge in formal interface design. Their preconditions are nontrivial, their postconditions are exacting, and they are deeply interconnected by mathematical theorems. I will address this challenge, presenting interfaces, theorems, and proofs in a lightly extended C++.

This talk takes its title from Bertrand Russell’s and Alfred North Whitehead’s logicist tour de force, Principia Mathematica. It echoes that work in developing arithmetic from first principles, but starts from procedural first principles: stability of objects, substitutability of values, and repeatability of operations.

In sum, this talk is one part formal interface design, one part tour of C++ integral arithmetic, one part foundations of arithmetic, and one part writing mathematical proofs procedurally.

#cplusplus #cpp #programming #mathematics #math

1665462168

## Riemann integral vs. Lebesgue integral | Clearly Explained

In this tutorial, I explain the differences between the Riemann integral and the Lebesgue integral in a demonstrative way.

In the branch of mathematics known as real analysis, the Riemann integral, created by Bernhard Riemann, was the first rigorous definition of the integral of a function on an interval. It was presented to the faculty at the University of Göttingen in 1854, but not published in a journal until 1868. For many functions and practical applications, the Riemann integral can be evaluated by the fundamental theorem of calculus or approximated by numerical integration. - wikipedia

Lebesgue integration is an alternative way of defining the integral in terms of measure theory that is used to integrate a much broader class of functions than the Riemann integral or even the Riemann-Stieltjes integral. The idea behind the Lebesgue integral is that instead of approximating the total area by dividing it into vertical strips, one approximates the total area by dividing it into horizontal strips. This corresponds to asking "for each y-value, how many x-values produce this value?" as opposed to asking "for each x-value, what y-value does it produce?"
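
The vertical-strip versus horizontal-strip contrast can be sketched numerically for f(x) = x² on [0, 1]. This is my own rough discretization, not the tutorial's material; the sample counts are arbitrary choices.

```python
# Riemann: sum f over vertical strips of the domain (midpoint rule).
def riemann(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Lebesgue-style: slice the RANGE into horizontal strips and weigh each
# level by the (approximate) measure of the set where f exceeds it,
# i.e. integral of f = integral over y of measure{x : f(x) > y}.
def lebesgue(f, a, b, n_levels, fmax, samples=2000):
    xs = [a + (b - a) * (i + 0.5) / samples for i in range(samples)]
    dy = fmax / n_levels
    total = 0.0
    for k in range(n_levels):
        level = (k + 0.5) * dy
        measure = (b - a) * sum(1 for x in xs if f(x) > level) / samples
        total += measure * dy
    return total
```

Both constructions approach the same value, 1/3, for this well-behaved function; the Lebesgue construction is the one that extends to functions the Riemann sum cannot handle.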

(This explanation is suited to lectures for students in their first year of study: mathematics for physicists, mathematics for the natural sciences, mathematics for engineers, and so on.)

I hope that this helps students, pupils and others.

0:00 Introduction
0:30 Riemann integral
2:00 Problems of Riemann integral
7:50 Riemann integral definition
9:13 Lebesgue integral - idea