The two most popular deep-learning frameworks are TensorFlow and PyTorch. Both support NVIDIA GPU acceleration via the CUDA toolkit. Since macOS does not support NVIDIA GPUs (and hence CUDA), Apple users have until now been limited to running machine learning (ML) on the CPU, which markedly limits the speed of training ML models.

With Macs powered by the new M1 chip, and the ML Compute framework available in macOS Big Sur, neural networks can now be trained directly on the Mac with a massive performance improvement.

According to a recent Apple blog post:

“The new tensorflow_macos fork of TensorFlow 2.4 leverages ML Compute to enable machine learning libraries to take full advantage of not only the CPU, but also the GPU in both M1- and Intel-powered Macs for dramatically faster training performance. This starts by applying higher-level optimizations such as fusing layers, selecting the appropriate device type and compiling and executing the graph as primitives that are accelerated by BNNS on the CPU and Metal Performance Shaders on the GPU.”
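The device selection mentioned in the quote is exposed to user code through an mlcompute helper module in the fork. A minimal sketch, assuming the tensorflow_macos 2.4 fork is installed (this API is specific to the fork and is not part of mainline TensorFlow):

```python
# Device selection in the tensorflow_macos fork (not available in mainline TensorFlow):
from tensorflow.python.compiler.mlcompute import mlcompute

# Ask ML Compute to place the graph on the GPU; 'cpu' and 'any' are also accepted.
mlcompute.set_mlc_device(device_name='gpu')
```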

Since I got the new M1 Mac Mini last week, I decided to try one of my TensorFlow scripts with the new Apple framework. I installed tensorflow_macos on the Mac Mini following the instructions on Apple's GitHub page and used a short script to classify items from the Fashion-MNIST dataset.
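A minimal sketch of such a script is shown below, assuming the tensorflow_macos fork is installed; the model architecture and hyperparameters (a small dense network, 10 epochs, batch size 64) are illustrative choices, not necessarily the exact settings behind the timings in this article.

```python
import tensorflow as tf
from tensorflow.python.compiler.mlcompute import mlcompute

# Route training to the M1 GPU via ML Compute (tensorflow_macos fork only).
mlcompute.set_mlc_device(device_name='gpu')

# Fashion-MNIST: 60,000 training and 10,000 test images of clothing
# items, each a 28x28 grayscale image with an integer class label (0-9).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# A small dense classifier -- simple, but enough to compare training throughput.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax'),
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=10, batch_size=64)
model.evaluate(x_test, y_test, verbose=2)
```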

