1626429236

Deep learning owes much of its success to automatic differentiation. Popular libraries such as TensorFlow and PyTorch track gradients over neural network parameters during training, and both provide high-level APIs for implementing the commonly used neural network functionality for deep learning. JAX is NumPy on the CPU, GPU, and TPU, with great automatic differentiation for high-performance machine learning research. Rather than a full deep learning framework, JAX is a highly polished linear algebra library with automatic differentiation and XLA support.
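As a minimal sketch of that autodiff-first design (an illustrative toy example, not from the article): `jax.grad` turns an ordinary NumPy-style function into one that computes its gradient, and `jax.jit` compiles it via XLA.

```
import jax
import jax.numpy as jnp

# mean-squared-error loss written in plain NumPy style
def loss(w, x, y):
    pred = x @ w
    return jnp.mean((pred - y) ** 2)

grad_fn = jax.jit(jax.grad(loss))  # gradient w.r.t. w, compiled with XLA

w = jnp.ones(3)
x = jnp.array([[1.0, 2.0, 3.0]])
y = jnp.array([10.0])
print(grad_fn(w, x, y))  # d(loss)/dw
```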

Read more: https://analyticsindiamag.com/jax-vs-tensorflow-vs-pytorch-a-comparative-analysis/

#tensorflow #pytorch

1647415674

https://www.youtube.com/playlist?list=PLxqBkZuBynVRnkwNgULYmJJs_JQZOAqpU

#ComputerVision #OpenCV #MachineLearning #imageprocessing #DataScience #TensorFlow #DeepLearning #Python #DataScientist #Statistics #ArtificialIntelligence #100DaysOfMLCode #Pytorch

***********************************

Playlist of 12 Videos - Deep Learning / Computer Vision Algorithm Implementations

👉 https://www.youtube.com/playlist?list=PLxqBkZuBynVRyOJs4RWmB_fKlOVe5S8CR

#ComputerVision #Pytorch #MachineLearning #imageprocessing #DataScience #TensorFlow #DeepLearning #Python #DataScientist #Statistics #ArtificialIntelligence #100DaysOfMLCode

👉 Github Repo (Numbered) - https://github.com/rohan-paul/MachineLearning-DeepLearning-Code-for-my-YouTube-Channel

👉 Blog - https://rohan-paul-ai.netlify.app/blog

You can find me here:

**********************************************

🐦 TWITTER: https://twitter.com/paulr_rohan

👨‍🔧 Kaggle: https://www.kaggle.com/paulrohan2020

👨🏻‍💼 LINKEDIN: https://www.linkedin.com/in/rohan-paul-b27285129/

👨‍💻 GITHUB: https://github.com/rohan-paul

🦾🤖 My Website and Blog: https://rohan-paul-ai.netlify.app/

🧑‍🦰 Facebook Page: https://www.facebook.com/Computer-Vision-with-Rohan-Paul-109348958325690

📸 Instagram: https://www.instagram.com/rohan_paul_2020/

**********************************************

1641319680

Flexible and powerful tensor operations for readable and reliable code. Supports numpy, pytorch, tensorflow, jax, and others.

- torch.jit.script is supported for pytorch layers (a scripting sketch follows this list)
- powerful EinMix added to einops (see the EinMix tutorial notebook)
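A minimal sketch of what that `torch.jit.script` support means in practice; the particular layer and shapes below are illustrative choices, not from the changelog:

```
import torch
from einops.layers.torch import Rearrange

# einops torch layers are nn.Module subclasses, so they can be scripted
layer = Rearrange('b c h w -> b (c h w)')
scripted = torch.jit.script(layer)

x = torch.randn(2, 3, 4, 4)
assert scripted(x).shape == (2, 48)
```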

"In case you need convincing arguments for setting aside time to learn about einsum and einops..." (Tim Rocktäschel, FAIR)

"Writing better code with PyTorch and einops 👌" (Andrej Karpathy, AI at Tesla)

"Slowly but surely, einops is seeping into every nook and cranny of my code. If you find yourself shuffling around bazillion-dimensional tensors, this might change your life." (Nasim Rahaman, MILA, Montreal)

- Installation
- Documentation
- Tutorial
- API micro-reference
- Why use einops
- Supported frameworks
- Contributing
- Repository and discussions

Plain and simple:

`pip install einops`

Tutorials are the most convenient way to see `einops` in action:

- part 1: einops fundamentals
- part 2: einops for deep learning
- part 3: improve pytorch code with einops

`einops` has a minimalistic yet powerful API.

Three operations are provided (the einops tutorial shows how they cover stacking, reshaping, transposition, squeeze/unsqueeze, repeat, tile, concatenate, view, and numerous reductions):

```
from einops import rearrange, reduce, repeat
# rearrange elements according to the pattern
output_tensor = rearrange(input_tensor, 't b c -> b c t')
# combine rearrangement and reduction
output_tensor = reduce(input_tensor, 'b c (h h2) (w w2) -> b h w c', 'mean', h2=2, w2=2)
# copy along a new axis
output_tensor = repeat(input_tensor, 'h w -> h w c', c=3)
```

And two corresponding layers (`einops` keeps a separate version for each framework) with the same API.

```
from einops.layers.chainer import Rearrange, Reduce
from einops.layers.gluon import Rearrange, Reduce
from einops.layers.keras import Rearrange, Reduce
from einops.layers.torch import Rearrange, Reduce
from einops.layers.tensorflow import Rearrange, Reduce
```

Layers behave similarly to operations and have the same parameters (with the exception of the first argument, which is passed during call)

```
layer = Rearrange(pattern, **axes_lengths)
layer = Reduce(pattern, reduction, **axes_lengths)
# apply created layer to a tensor / variable
x = layer(x)
```

Example of using layers within a model:

```
# example given for pytorch, but code in other frameworks is almost identical
from torch.nn import Sequential, Conv2d, MaxPool2d, Linear, ReLU
from einops.layers.torch import Rearrange

model = Sequential(
    Conv2d(3, 6, kernel_size=5),
    MaxPool2d(kernel_size=2),
    Conv2d(6, 16, kernel_size=5),
    MaxPool2d(kernel_size=2),
    # flattening
    Rearrange('b c h w -> b (c h w)'),
    Linear(16*5*5, 120),
    ReLU(),
    Linear(120, 10),
)
```

`einops` stands for Einstein-Inspired Notation for operations (though "Einstein operations" is more attractive and easier to remember). Notation was loosely inspired by Einstein summation (in particular by the `numpy.einsum` operation).

Why use `einops` notation?!

```
y = x.view(x.shape[0], -1)
y = rearrange(x, 'b c h w -> b (c h w)')
```

While these two lines do the same job in *some* context, the second one provides information about the input and output. In other words, `einops` focuses on the interface: *what is the input and output*, not *how* the output is computed.

The next operation looks similar:

```
y = rearrange(x, 'time c h w -> time (c h w)')
```

but it gives the reader a hint: this is not an independent batch of images we are processing, but rather a sequence (video).

Semantic information makes the code easier to read and maintain.

Reconsider the same example:

```
y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)')
```

The second line checks that the input has four dimensions, but you can also specify particular dimensions. That's different from just writing comments about shapes: comments aren't executed and, as we know, don't prevent mistakes.

```
y = x.view(x.shape[0], -1) # x: (batch, 256, 19, 19)
y = rearrange(x, 'b c h w -> b (c h w)', c=256, h=19, w=19)
```

Below we have at least two ways to define the depth-to-space operation

```
# depth-to-space
rearrange(x, 'b c (h h2) (w w2) -> b (c h2 w2) h w', h2=2, w2=2)
rearrange(x, 'b c (h h2) (w w2) -> b (h2 w2 c) h w', h2=2, w2=2)
```

There are at least four more ways to do it, and which one your framework uses is hidden from you. Such details are ignored, since *usually* it makes no difference; but it can make a big difference (e.g. if you use grouped convolutions in the next stage), so you'd like to specify this explicitly in your code.

Uniformity is another strength: the same reduction pattern handles pooling over any number of axes.

```
reduce(x, 'b c (x dx) -> b c x', 'max', dx=2)
reduce(x, 'b c (x dx) (y dy) -> b c x y', 'max', dx=2, dy=3)
reduce(x, 'b c (x dx) (y dy) (z dz) -> b c x y z', 'max', dx=2, dy=3, dz=4)
```

These examples demonstrate that we don't need separate operations for 1d/2d/3d pooling; they are all defined in a uniform way.

Space-to-depth and depth-to-space are defined in many frameworks, but how about width-to-height? Here you go:

```
rearrange(x, 'b c h (w w2) -> b c (h w2) w', w2=2)
```

Even simple functions are defined differently by different frameworks:

```
y = x.flatten() # or flatten(x)
```

Suppose `x`'s shape was `(3, 4, 5)`, then `y` has shape ...

- numpy, cupy, chainer, pytorch: `(60,)`
- keras, tensorflow.layers, mxnet and gluon: `(3, 20)`

`einops` works the same way in all frameworks.
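For instance (a sketch using the patterns shown above, not taken from the README), the ambiguity of `flatten` disappears once the pattern spells out what happens to each axis:

```
import numpy as np
from einops import rearrange

x = np.zeros((3, 4, 5))
# flatten everything: shape (60,)
y1 = rearrange(x, 'a b c -> (a b c)')
# keep the leading axis: shape (3, 20)
y2 = rearrange(x, 'a b c -> a (b c)')
assert y1.shape == (60,) and y2.shape == (3, 20)
```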

Example: `tile` vs `repeat` causes lots of confusion. To copy an image along its width:

```
np.tile(image, (1, 2)) # in numpy
image.repeat(1, 2) # pytorch's repeat ~ numpy's tile
```

With einops you don't need to decipher which axis was repeated:

```
repeat(image, 'h w -> h (tile w)', tile=2) # in numpy
repeat(image, 'h w -> h (tile w)', tile=2) # in pytorch
repeat(image, 'h w -> h (tile w)', tile=2) # in tf
repeat(image, 'h w -> h (tile w)', tile=2) # in jax
repeat(image, 'h w -> h (tile w)', tile=2) # in mxnet
... (etc.)
```

Testimonials provide users' perspectives on the same question.

Einops works with numpy, pytorch, tensorflow, jax, and others (see the supported frameworks list above).

Best ways to contribute are:

- spread the word about `einops`
- if you like explaining things, more tutorials/tear-downs of implementations are welcome
- tutorials in other languages are very welcome
- do you have a project or code example to share? Let me know in github discussions
- use `einops` in your papers!

`einops` works with python 3.6 or later.

Download Details:

Author: Arogozhnikov

Source Code: https://github.com/arogozhnikov/einops

License: MIT License

1623745500

- PyTorch gives our researchers unprecedented flexibility in designing their models and running their experiments.

Google’s TensorFlow and Facebook’s PyTorch are the most popular machine learning frameworks. The former has a two-year head start over PyTorch (released in 2016). TensorFlow’s popularity reportedly declined after PyTorch burst onto the scene. However, Google released the more user-friendly TensorFlow 2.0 in 2019 to recover lost ground.

(Figure: Interest over time for TensorFlow (top) and PyTorch (bottom) in India. Credit: Google Trends)

PyTorch, a deep learning framework that integrates with important Python add-ons like NumPy and supports data-science tasks requiring fast GPU processing, has made some recent additions:

- **Enterprise support**: After taking over the Windows 10 PyTorch library from Facebook to boost GPU-accelerated machine learning training on Windows 10’s Subsystem for Linux (WSL), Microsoft recently added enterprise support for PyTorch AI on Azure to give PyTorch users a more reliable production experience. “This new enterprise-level offering by Microsoft closes an important gap. PyTorch gives our researchers unprecedented flexibility in designing their models and running their experiments,” said Jeremy Jancsary, a senior principal research scientist at Nuance.
- **PyTorchVideo**: A deep learning library for video understanding recently unveiled by Facebook AI; the source code is available on GitHub. With it, Facebook aims to help researchers develop cutting-edge machine learning models and tools that enhance video understanding capabilities, while providing a unified repository of reproducible and efficient video understanding components for research and production applications.
- **PyTorch Profiler**: In April this year, PyTorch announced its new performance debugging profiler, PyTorch Profiler, along with the 1.8.1 release. The new tool enables accurate and efficient performance analysis of large-scale deep learning models (a usage sketch follows this list).
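A minimal sketch of the profiler's usage, assuming PyTorch >= 1.8.1; the model, input, and sort key below are illustrative choices, not from the announcement:

```
import torch
from torch.profiler import profile, ProfilerActivity

model = torch.nn.Linear(128, 10)
x = torch.randn(32, 128)

# record CPU activity (and input shapes) for a single forward pass
with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
    model(x)

# summarize time spent per operator
print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=10))
```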

#opinions #deep learning frameworks #machine learning pytorch #open-source frameworks #pytorch #tensorflow #tensorflow 2.0

1610436416

Learn how to set up anaconda environments for different versions of CUDA, TensorFlow, and PyTorch

It’s a real shame that the first experience that most people have with deep learning is having to spend days trying to figure out why the model they downloaded off of GitHub just… won’t… run….

Dependency issues are incredibly common when trying to run an off-the-shelf model. The most problematic is needing the correct version of CUDA for TensorFlow. TensorFlow has been prominent for a number of years, which means that even newly released models may depend on an old version of it. This wouldn't be an issue, except that every version of TensorFlow seems to require one specific version of CUDA, with anything else being incompatible. Sadly, installing multiple versions of CUDA on the same machine can be a real pain! Per-environment conda installs sidestep this, as sketched below.
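A hedged sketch of the approach the article's title alludes to: conda can install `cudatoolkit` inside each environment, so different projects can pin different CUDA versions without touching the system-wide install. The environment names and version pairings below are illustrative, not prescriptive.

```
# one environment per CUDA/TensorFlow pairing; versions are illustrative
conda create -n tf1-cuda10 python=3.7
conda activate tf1-cuda10
conda install cudatoolkit=10.0 cudnn -c conda-forge
pip install tensorflow-gpu==1.15

# a separate environment can pin a different CUDA for PyTorch
conda create -n torch-cuda11 python=3.8
conda activate torch-cuda11
conda install pytorch cudatoolkit=11.1 -c pytorch -c nvidia
```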

#machine-learning #pytorch #tensorflow