1686148596
This course will give you an introduction to machine learning concepts and neural network implementation using Python and TensorFlow. Kylie Ying explains basic concepts, such as classification, regression, training/validation/test datasets, loss functions, neural networks, and model training. She then demonstrates how to implement a feedforward neural network to predict whether someone has diabetes, as well as two different neural net architectures to classify wine reviews.
⭐️ Course Contents ⭐️
⌨️ (0:00:00) Introduction
⌨️ (0:00:34) Colab intro (importing wine dataset)
⌨️ (0:07:48) What is machine learning?
⌨️ (0:14:00) Features (inputs)
⌨️ (0:20:22) Outputs (predictions)
⌨️ (0:25:05) Anatomy of a dataset
⌨️ (0:30:22) Assessing performance
⌨️ (0:35:01) Neural nets
⌨️ (0:48:50) Tensorflow
⌨️ (0:50:45) Colab (feedforward network using diabetes dataset)
⌨️ (1:21:15) Recurrent neural networks
⌨️ (1:26:20) Colab (text classification networks using wine dataset)
⭐️ Resources ⭐️
💻 Datasets: https://drive.google.com/drive/folders/1YnxDqNIqM2Xr1Dlgv5pYsE6dYJ9MGxcM?usp=sharing
💻 Feedforward NN colab notebook: https://colab.research.google.com/drive/1UxmeNX_MaIO0ni26cg9H6mtJcRFafWiR?usp=sharing
💻 Wine review colab notebook: https://colab.research.google.com/drive/1yO7EgCYSN3KW8hzDTz809nzNmacjBBXX?usp=sharing
#python #tensorflow #datascience #machinelearning #deeplearning #ai #artificialintelligence #programming #developer #morioh #softwaredeveloper #computerscience
1686121128
Today we use TensorFlow to build a neural network, which we then use to recognize images of handwritten digits that we created ourselves. Whether you're new to machine learning or an experienced developer, follow along with this tutorial and get started with handwriting recognition today!
📁 GitHub: https://github.com/NeuralNine
🎵 Outro Music From: https://www.bensound.com/
Subscribe : https://www.youtube.com/channel/UC8wZnXYK_CGKlBcZp-GxYPA
1686107826
Learn Machine Learning in a way that is accessible to absolute beginners. You will learn the basics of Machine Learning and how to use TensorFlow to implement many different concepts.
⭐️ Contents ⭐️
⌨️ (0:00:00) Intro
⌨️ (0:00:58) Data/Colab Intro
⌨️ (0:08:45) Intro to Machine Learning
⌨️ (0:12:26) Features
⌨️ (0:17:23) Classification/Regression
⌨️ (0:19:57) Training Model
⌨️ (0:30:57) Preparing Data
⌨️ (0:44:43) K-Nearest Neighbors
⌨️ (0:52:42) KNN Implementation
⌨️ (1:08:43) Naive Bayes
⌨️ (1:17:30) Naive Bayes Implementation
⌨️ (1:19:22) Logistic Regression
⌨️ (1:27:56) Log Regression Implementation
⌨️ (1:29:13) Support Vector Machine
⌨️ (1:37:54) SVM Implementation
⌨️ (1:39:44) Neural Networks
⌨️ (1:47:57) Tensorflow
⌨️ (1:49:50) Classification NN using Tensorflow
⌨️ (2:10:12) Linear Regression
⌨️ (2:34:54) Lin Regression Implementation
⌨️ (2:57:44) Lin Regression using a Neuron
⌨️ (3:00:15) Regression NN using Tensorflow
⌨️ (3:13:13) K-Means Clustering
⌨️ (3:23:46) Principal Component Analysis
⌨️ (3:33:54) K-Means and PCA Implementations
⭐️ Code and Resources ⭐️
🔗 Supervised learning (classification/MAGIC): https://colab.research.google.com/drive/16w3TDn_tAku17mum98EWTmjaLHAJcsk0?usp=sharing
🔗 Supervised learning (regression/bikes): https://colab.research.google.com/drive/1m3oQ9b0oYOT-DXEy0JCdgWPLGllHMb4V?usp=sharing
🔗 Unsupervised learning (seeds): https://colab.research.google.com/drive/1zw_6ZnFPCCh6mWDAd_VBMZB4VkC3ys2q?usp=sharing
🔗 Datasets (note: for the bikes dataset, you may need to open the downloaded CSV file and remove special characters):
🔗 MAGIC dataset: https://archive.ics.uci.edu/ml/datasets/MAGIC+Gamma+Telescope
🔗 Bikes dataset: https://archive.ics.uci.edu/ml/datasets/Seoul+Bike+Sharing+Demand
🔗 Seeds/wheat dataset: https://archive.ics.uci.edu/ml/datasets/seeds
#tensorflow #python #datascience #machinelearning #deeplearning #ai #artificialintelligence #programming #developer #morioh #softwaredeveloper #computerscience
1686077040
Open Neural Network Exchange (ONNX) is an open ecosystem that empowers AI developers to choose the right tools as their project evolves. ONNX provides an open source format for AI models, both deep learning and traditional ML. It defines an extensible computation graph model, as well as definitions of built-in operators and standard data types. Currently we focus on the capabilities needed for inferencing (scoring).
ONNX is widely supported and can be found in many frameworks, tools, and hardware. Enabling interoperability between different frameworks and streamlining the path from research to production helps increase the speed of innovation in the AI community. We invite the community to join us and further evolve ONNX.
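As a quick illustration of the computation graph model described above, here is a minimal sketch (not taken from the ONNX docs) that builds and validates a one-node model with the official onnx Python helpers:
import onnx
from onnx import helper, TensorProto

# Typed value infos describe the graph's inputs and outputs.
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [1, 3])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [1, 3])

# One node using the built-in Relu operator: Y = Relu(X).
node = helper.make_node("Relu", inputs=["X"], outputs=["Y"])
graph = helper.make_graph([node], "tiny_graph", [X], [Y])
model = helper.make_model(graph)

onnx.checker.check_model(model)  # validates the model against the ONNX spec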
ONNX release packages are published on PyPI.
pip install onnx
ONNX weekly packages are published in PyPI to enable experimentation and early testing.
ONNX is on the maintenance list of vcpkg, so you can easily use vcpkg to build and install it:
git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.bat # For PowerShell
./bootstrap-vcpkg.sh # For bash
./vcpkg install onnx
A binary build of ONNX is available from Conda, in conda-forge:
conda install -c conda-forge onnx
Before building from source, uninstall any existing version of ONNX:
pip uninstall onnx
A C++17 (or higher) compiler is required to build ONNX from source on Windows. For other platforms, please use C++14 or higher.
Generally speaking, you need to install the protobuf C/C++ libraries and tools before proceeding. Then, depending on how you installed protobuf, set the environment variable CMAKE_ARGS to "-DONNX_USE_PROTOBUF_SHARED_LIBS=ON" or "-DONNX_USE_PROTOBUF_SHARED_LIBS=OFF". For example, you may need to run the following command:
Linux:
export CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"
Windows:
set CMAKE_ARGS="-DONNX_USE_PROTOBUF_SHARED_LIBS=ON"
The ON/OFF choice depends on what kind of protobuf library you have: shared libraries are files ending with *.dll/*.so/*.dylib, while static libraries are files ending with *.a/*.lib. It depends on how you obtained your protobuf library and how it was built, and it defaults to OFF. You don't need to run the commands above if you'd prefer to use a static protobuf library.
If you are building ONNX from source, it is recommended that you also build Protobuf locally as a static library. The version distributed with conda-forge is a DLL, but ONNX expects it to be a static library. Building protobuf locally also lets you control the version of protobuf. The tested and recommended version is 3.20.2.
The instructions in this README assume you are using Visual Studio. It is recommended that you run all the commands from a shell started from "x64 Native Tools Command Prompt for VS 2019" and keep the build system generator for cmake (e.g., cmake -G "Visual Studio 16 2019") consistent while building protobuf as well as ONNX.
You can get protobuf by running the following commands:
git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v3.20.2
cd cmake
cmake -G "Visual Studio 16 2019" -A x64 -DCMAKE_INSTALL_PREFIX=<protobuf_install_dir> -Dprotobuf_MSVC_STATIC_RUNTIME=OFF -Dprotobuf_BUILD_SHARED_LIBS=OFF -Dprotobuf_BUILD_TESTS=OFF -Dprotobuf_BUILD_EXAMPLES=OFF .
msbuild protobuf.sln /m /p:Configuration=Release
msbuild INSTALL.vcxproj /p:Configuration=Release
Protobuf will then be built as a static library and installed to <protobuf_install_dir>. Please add the bin directory (which contains protoc.exe) to your PATH:
set PATH=<protobuf_install_dir>/bin;%PATH%
Please note: if your protobuf_install_dir contains spaces, do not add quotation marks around it.
Alternative: if you don't want to change your PATH, you can set ONNX_PROTOC_EXECUTABLE instead.
set CMAKE_ARGS=-DONNX_PROTOC_EXECUTABLE=<full_path_to_protoc.exe>
Then you can build ONNX as:
git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# prefer lite proto
set CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e .
First, you need to install protobuf. The minimum Protobuf compiler (protoc) version required by ONNX is 3.6.1. Please note that old protoc versions might not work with CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON.
Ubuntu 20.04 (and newer) users may choose to install protobuf via
apt-get install python3-pip python3-dev libprotobuf-dev protobuf-compiler
In this case, it is required to add -DONNX_USE_PROTOBUF_SHARED_LIBS=ON to CMAKE_ARGS in the ONNX build step.
A more general way is to build and install it from source. See the instructions below for more details.
Installing Protobuf from source
Debian/Ubuntu:
git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v3.20.2
git submodule update --init --recursive
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
make install
CentOS/RHEL/Fedora:
git clone https://github.com/protocolbuffers/protobuf.git
cd protobuf
git checkout v3.20.2
git submodule update --init --recursive
mkdir build_source && cd build_source
cmake ../cmake -DCMAKE_INSTALL_LIBDIR=lib64 -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
make install
Here "-DCMAKE_POSITION_INDEPENDENT_CODE=ON" is crucial. By default static libraries are built without "-fPIC" flag, they are not position independent code. But shared libraries must be position independent code. Python C/C++ extensions(like ONNX) are shared libraries. So if a static library was not built with "-fPIC", it can't be linked to such a shared library.
Once build is successful, update PATH to include protobuf paths.
Then you can build ONNX as:
git clone https://github.com/onnx/onnx.git
cd onnx
git submodule update --init --recursive
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e .
macOS:
export NUM_CORES=`sysctl -n hw.ncpu`
brew update
brew install autoconf && brew install automake
wget https://github.com/protocolbuffers/protobuf/releases/download/v3.20.2/protobuf-cpp-3.20.2.tar.gz
tar -xvf protobuf-cpp-3.20.2.tar.gz
cd protobuf-3.20.2
mkdir build_source && cd build_source
cmake ../cmake -Dprotobuf_BUILD_SHARED_LIBS=OFF -DCMAKE_POSITION_INDEPENDENT_CODE=ON -Dprotobuf_BUILD_TESTS=OFF -DCMAKE_BUILD_TYPE=Release
make -j${NUM_CORES}
make install
Once build is successful, update PATH to include protobuf paths.
Then you can build ONNX as:
git clone --recursive https://github.com/onnx/onnx.git
cd onnx
# Optional: prefer lite proto
export CMAKE_ARGS=-DONNX_USE_LITE_PROTO=ON
pip install -e .
After installation, run
python -c "import onnx"
to verify it works.
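A slightly stronger sanity check, assuming you have a model file on disk (model.onnx below is a placeholder path, not a file shipped with ONNX):
import onnx

model = onnx.load("model.onnx")  # placeholder path; use your own file
onnx.checker.check_model(model)  # raises an exception if the model is invalid
print(onnx.helper.printable_graph(model.graph))  # human-readable graph summary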
For the full list of build options, refer to CMakeLists.txt.
USE_MSVC_STATIC_RUNTIME should be 1 or 0, not ON or OFF. When set to 1, ONNX links statically to the runtime library. Default: USE_MSVC_STATIC_RUNTIME=0
DEBUG should be 0 or 1. When set to 1, ONNX is built in debug mode. For debug versions of the dependencies, you need to open the CMakeLists file and append a letter d at the end of the package name lines; for example, NAMES protobuf-lite would become NAMES protobuf-lited. Default: DEBUG=0
ONNX_USE_PROTOBUF_SHARED_LIBS should be ON or OFF. Default: ONNX_USE_PROTOBUF_SHARED_LIBS=OFF USE_MSVC_STATIC_RUNTIME=0. This option determines how ONNX links to the protobuf libraries. When set to ON, ONNX will dynamically link to the protobuf shared libs, PROTOBUF_USE_DLLS will be defined as described here, Protobuf_USE_STATIC_LIBS will be set to OFF, and USE_MSVC_STATIC_RUNTIME must be 0. When set to OFF, ONNX will link statically to protobuf, Protobuf_USE_STATIC_LIBS will be set to ON (to force the use of the static libraries), and USE_MSVC_STATIC_RUNTIME can be 0 or 1.
ONNX_USE_LITE_PROTO should be ON or OFF. When set to ON, ONNX uses lite protobuf instead of full protobuf. Default: ONNX_USE_LITE_PROTO=OFF
ONNX_WERROR should be ON or OFF. When set to ON, warnings are treated as errors. Default: ONNX_WERROR=OFF in local builds, ON in CI and release pipelines.
Note: the import onnx command does not work from the source checkout directory; in this case you'll see ModuleNotFoundError: No module named 'onnx.onnx_cpp2py_export'. Change into another directory to fix this error.
If you run into any issues while building Protobuf as a static library, please ensure that shared Protobuf libraries, like libprotobuf, are not installed on your device or in the conda environment. If these shared libraries exist, either remove them to build Protobuf from source as a static library, or skip the Protobuf build from source to use the shared version directly.
If you run into any issues while building ONNX from source and your error message reads Could not find pythonXX.lib, ensure that you have consistent Python versions for common commands, such as python and pip. Clean all existing build files and rebuild ONNX again.
Testing
ONNX uses pytest as its test driver. In order to run the tests, you will first need to install pytest:
pip install pytest nbval
After installing pytest, use the following command to run tests.
pytest
Development
Check out the contributor guide for instructions.
Use ONNX
Learn about the ONNX spec
Programming utilities for working with ONNX Graphs
Contribute
ONNX is a community project and the open governance model is described here. We encourage you to join the effort and contribute feedback, ideas, and code. You can participate in the Special Interest Groups and Working Groups to shape the future of ONNX.
Check out our contribution guide to get started.
If you think some operator should be added to ONNX specification, please read this document.
Community meetings
The schedules of the regular meetings of the Steering Committee, the working groups, and the SIGs can be found here.
Community Meetups are held at least once a year, and content from previous community meetups is also available.
Discuss
We encourage you to open Issues, or use Slack (If you have not joined yet, please use this link to join the group) for more real-time discussion.
Follow Us
Stay up to date with the latest ONNX news. [Facebook] [Twitter]
Roadmap
A roadmap process takes place every year. More details can be found here.
Author: onnx
Source Code: https://github.com/onnx/onnx
License: Apache-2.0 license
1686062364
Learn the basics of computer vision with deep learning and how to implement the algorithms using Tensorflow.
⭐️ Contents ⭐️
Introduction
⌨️ (0:00:00) Welcome
⌨️ (0:05:54) Prerequisite
⌨️ (0:06:11) What we shall Learn
Tensors and Variables
⌨️ (0:12:12) Basics
⌨️ (0:19:26) Initialization and Casting
⌨️ (1:07:31) Indexing
⌨️ (1:16:15) Maths Operations
⌨️ (1:55:02) Linear Algebra Operations
⌨️ (2:56:21) Common TensorFlow Functions
⌨️ (3:50:15) Ragged Tensors
⌨️ (4:01:41) Sparse Tensors
⌨️ (4:04:23) String Tensors
⌨️ (4:07:45) Variables
Building Neural Networks with TensorFlow [Car Price Prediction]
⌨️ (4:14:52) Task Understanding
⌨️ (4:19:47) Data Preparation
⌨️ (4:54:47) Linear Regression Model
⌨️ (5:10:18) Error Sanctioning
⌨️ (5:24:53) Training and Optimization
⌨️ (5:41:22) Performance Measurement
⌨️ (5:44:18) Validation and Testing
⌨️ (6:04:30) Corrective Measures
Building Convolutional Neural Networks with TensorFlow [Malaria Diagnosis]
⌨️ (6:28:50) Task Understanding
⌨️ (6:37:40) Data Preparation
⌨️ (6:57:40) Data Visualization
⌨️ (7:00:20) Data Processing
⌨️ (7:08:50) How and Why ConvNets Work
⌨️ (7:56:15) Building Convnets with TensorFlow
⌨️ (8:02:39) Binary Crossentropy Loss
⌨️ (8:10:15) Training Convnets
⌨️ (8:23:33) Model Evaluation and Testing
⌨️ (8:29:15) Loading and Saving Models to Google Drive
Building More Advanced Convolutional Neural Networks with TensorFlow [Malaria Diagnosis]
⌨️ (8:47:10) Functional API
⌨️ (9:03:48) Model Subclassing
⌨️ (9:19:05) Custom Layers
Evaluating Classification Models [Malaria Diagnosis]
⌨️ (9:36:45) Precision, Recall and Accuracy
⌨️ (10:00:35) Confusion Matrix
⌨️ (10:10:10) ROC Plots
Improving Model Performance [Malaria Diagnosis]
⌨️ (10:18:10) TensorFlow Callbacks
⌨️ (10:43:55) Learning Rate Scheduling
⌨️ (11:01:25) Model Checkpointing
⌨️ (11:09:25) Mitigating Overfitting and Underfitting
Data Augmentation [Malaria Diagnosis]
⌨️ (11:38:50) Augmentation with tf.image and Keras Layers
⌨️ (12:38:00) Mixup Augmentation
⌨️ (12:56:35) Cutmix Augmentation
⌨️ (13:38:30) Data Augmentation with Albumentations
Advanced TensorFlow Topics [Malaria Diagnosis]
⌨️ (13:58:35) Custom Loss and Metrics
⌨️ (14:18:30) Eager and Graph Modes
⌨️ (14:31:23) Custom Training Loops
Tensorboard Integration [Malaria Diagnosis]
⌨️ (14:57:00) Data Logging
⌨️ (15:29:00) View Model Graphs
⌨️ (15:31:45) Hyperparameter Tuning
⌨️ (15:52:40) Profiling and Visualizations
MLOps with Weights and Biases [Malaria Diagnosis]
⌨️ (16:00:35) Experiment Tracking
⌨️ (16:55:02) Hyperparameter Tuning
⌨️ (17:17:15) Dataset Versioning
⌨️ (18:00:23) Model Versioning
Human Emotions Detection
⌨️ (18:16:55) Data Preparation
⌨️ (18:45:38) Modeling and Training
⌨️ (19:36:42) Data Augmentation
⌨️ (19:54:30) TensorFlow Records
Modern Convolutional Neural Networks [Human Emotions Detection]
⌨️ (20:31:25) AlexNet
⌨️ (20:48:35) VGGNet
⌨️ (20:59:50) ResNet
⌨️ (21:34:07) Coding ResNet from Scratch
⌨️ (21:56:17) MobileNet
⌨️ (22:20:43) EfficientNet
Transfer Learning [Human Emotions Detection]
⌨️ (22:38:15) Feature Extraction
⌨️ (23:02:25) Finetuning
Understanding the Blackbox [Human Emotions Detection]
⌨️ (23:15:33) Visualizing Intermediate Layers
⌨️ (23:36:20) Gradcam method
Transformers in Vision [Human Emotions Detection]
⌨️ (23:57:35) Understanding ViTs
⌨️ (24:51:17) Building ViTs from Scratch
⌨️ (25:42:39) FineTuning Huggingface ViT
⌨️ (26:05:52) Model Evaluation with Wandb
Model Deployment [Human Emotions Detection]
⌨️ (26:27:13) Converting TensorFlow Model to Onnx format
⌨️ (26:52:26) Understanding Quantization
⌨️ (27:13:08) Practical Quantization of Onnx Model
⌨️ (27:22:01) Quantization Aware Training
⌨️ (27:39:55) Conversion to TensorFlow Lite
⌨️ (27:58:28) How APIs work
⌨️ (28:18:28) Building an API with FastAPI
⌨️ (29:39:10) Deploying API to the Cloud
⌨️ (29:51:35) Load Testing with Locust
Object Detection with YOLO
⌨️ (30:05:29) Introduction to Object Detection
⌨️ (30:11:39) Understanding YOLO Algorithm
⌨️ (31:15:17) Dataset Preparation
⌨️ (31:58:27) YOLO Loss
⌨️ (33:02:58) Data Augmentation
⌨️ (33:27:33) Testing
Image Generation
⌨️ (33:59:28) Introduction to Image Generation
⌨️ (34:03:18) Understanding Variational Autoencoders
⌨️ (34:20:46) VAE Training and Digit Generation
⌨️ (35:06:05) Latent Space Visualization
⌨️ (35:21:36) How GANs work
⌨️ (35:43:30) The GAN Loss
⌨️ (36:01:38) Improving GAN Training
⌨️ (36:25:02) Face Generation with GANs
Conclusion
⌨️ (37:15:45) What's Next
Link to Code: https://colab.research.google.com/drive/18u1KDx-9683iZNPxSDZ6dOv9319ZuEC_
#tensorflow #python #computervision #opencv #algorithms #datascience #machinelearning #deeplearning #ai #artificialintelligence #programming #developer #morioh #softwaredeveloper #computerscience
1686060600
🤗 Datasets provides one-line dataloaders (for example, squad_dataset = load_dataset("squad")) to get any of these datasets ready to use in a dataloader for training/evaluating an ML model (NumPy/Pandas/PyTorch/TensorFlow/JAX), and simple processing commands (for example, processed_dataset = dataset.map(process_example)) to efficiently prepare the dataset for inspection and ML model evaluation and training. 🤗 Datasets is designed to let the community easily add and share new datasets.
🤗 Datasets has many additional interesting features:
🤗 Datasets originated as a fork of the awesome TensorFlow Datasets, and the HuggingFace team want to deeply thank the TensorFlow Datasets team for building this amazing library. More details on the differences between 🤗 Datasets and tfds can be found in the section Main differences between 🤗 Datasets and tfds.
Installation
🤗 Datasets can be installed from PyPI and has to be installed in a virtual environment (venv or conda, for instance):
pip install datasets
🤗 Datasets can be installed using conda as follows:
conda install -c huggingface -c conda-forge datasets
Follow the installation pages of TensorFlow and PyTorch to see how to install them with conda.
For more details on installation, check the installation page in the documentation: https://huggingface.co/docs/datasets/installation
If you plan to use 🤗 Datasets with PyTorch (1.0+), TensorFlow (2.2+) or pandas, you should also install PyTorch, TensorFlow or pandas.
For more details on using the library with NumPy, pandas, PyTorch or TensorFlow, check the quick start page in the documentation: https://huggingface.co/docs/datasets/quickstart
Usage
🤗 Datasets is made to be very simple to use. The main methods are:
datasets.list_datasets() to list the available datasets
datasets.load_dataset(dataset_name, **kwargs) to instantiate a dataset
This library can be used for text/image/audio/etc. datasets.
Here is a quick example:
from datasets import list_datasets, load_dataset
# Print all the available datasets
print(list_datasets())
# Load a dataset and print the first example in the training set
squad_dataset = load_dataset('squad')
print(squad_dataset['train'][0])
# Process the dataset - add a column with the length of the context texts
dataset_with_length = squad_dataset.map(lambda x: {"length": len(x["context"])})
# Process the dataset - tokenize the context texts (using a tokenizer from the 🤗 Transformers library)
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-cased')
tokenized_dataset = squad_dataset.map(lambda x: tokenizer(x['context']), batched=True)
If your dataset is bigger than your disk or if you don't want to wait to download the data, you can use streaming:
# If you want to use the dataset immediately and efficiently stream the data as you iterate over the dataset
image_dataset = load_dataset('cifar100', streaming=True)
for example in image_dataset["train"]:
break
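The loop above just grabs the first example without materializing the whole dataset. A small follow-on sketch for actually inspecting it (the field names shown are what cifar100 is expected to expose, so treat them as an assumption):
first_example = next(iter(image_dataset["train"]))  # streaming: nothing is fully downloaded
print(first_example.keys())  # e.g. dict_keys(['img', 'fine_label', 'coarse_label'])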
For more details on using the library, check the quick start page in the documentation (https://huggingface.co/docs/datasets/quickstart.html) and the specific pages for each topic.
Another introduction to 🤗 Datasets is the tutorial on Google Colab.
Add a new dataset to the Hub
We have a very detailed step-by-step guide to add a new dataset to the datasets already provided on the HuggingFace Datasets Hub.
Main differences between 🤗 Datasets and tfds
If you are familiar with the great TensorFlow Datasets, here are the main differences between 🤗 Datasets and tfds: the dataset object is not a tf.data.Dataset but a built-in, framework-agnostic dataset class with methods inspired by what we like in tf.data (like a map() method). It basically wraps a memory-mapped Arrow table cache.
Disclaimers
Similar to TensorFlow Datasets, 🤗 Datasets is a utility library that downloads and prepares public datasets. We do not host or distribute most of these datasets, vouch for their quality or fairness, or claim that you have license to use them. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.
Moreover, 🤗 Datasets may run Python code defined by the dataset authors to parse certain data formats or structures. For security reasons, we ask users to check the dataset scripts they're going to run beforehand and to pin the revision of the repositories they use.
If you're a dataset owner and wish to update any part of it (description, citation, license, etc.), or do not want your dataset to be included in the Hugging Face Hub, please get in touch by opening a discussion or a pull request in the Community tab of the dataset page. Thanks for your contribution to the ML community!
If you want to cite our 🤗 Datasets library, you can use our paper:
@inproceedings{lhoest-etal-2021-datasets,
title = "Datasets: A Community Library for Natural Language Processing",
author = "Lhoest, Quentin and
Villanova del Moral, Albert and
Jernite, Yacine and
Thakur, Abhishek and
von Platen, Patrick and
Patil, Suraj and
Chaumond, Julien and
Drame, Mariama and
Plu, Julien and
Tunstall, Lewis and
Davison, Joe and
{\v{S}}a{\v{s}}ko, Mario and
Chhablani, Gunjan and
Malik, Bhavitvya and
Brandeis, Simon and
Le Scao, Teven and
Sanh, Victor and
Xu, Canwen and
Patry, Nicolas and
McMillan-Major, Angelina and
Schmid, Philipp and
Gugger, Sylvain and
Delangue, Cl{\'e}ment and
Matussi{\`e}re, Th{\'e}o and
Debut, Lysandre and
Bekman, Stas and
Cistac, Pierric and
Goehringer, Thibault and
Mustar, Victor and
Lagunas, Fran{\c{c}}ois and
Rush, Alexander and
Wolf, Thomas",
booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = nov,
year = "2021",
address = "Online and Punta Cana, Dominican Republic",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2021.emnlp-demo.21",
pages = "175--184",
abstract = "The scale, variety, and quantity of publicly-available NLP datasets has grown rapidly as researchers propose new tasks, larger models, and novel benchmarks. Datasets is a community library for contemporary NLP designed to support this ecosystem. Datasets aims to standardize end-user interfaces, versioning, and documentation, while providing a lightweight front-end that behaves similarly for small datasets as for internet-scale corpora. The design of the library incorporates a distributed, community-driven approach to adding datasets and documenting usage. After a year of development, the library now includes more than 650 unique datasets, has more than 250 contributors, and has helped support a variety of novel cross-dataset research projects and shared tasks. The library is available at https://github.com/huggingface/datasets.",
eprint={2109.02846},
archivePrefix={arXiv},
primaryClass={cs.CL},
}
If you need to cite a specific version of our 🤗 Datasets library for reproducibility, you can use the corresponding version Zenodo DOI from this list.
🎓 Documentation 🕹 Colab tutorial
🔎 Find a dataset in the Hub 🌟 Add a new dataset to the Hub
Author: Huggingface
Source Code: https://github.com/huggingface/datasets
License: Apache-2.0 license
#machinelearning #nlp #computervision #deeplearning #tensorflow #numpy
1685992620
Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a toolkit of libraries (Ray AIR) for simplifying ML compute.
Learn more about Ray AIR and its libraries, or about Ray Core and its key abstractions, in the documentation.
Monitor and debug Ray applications and clusters using the Ray dashboard.
Ray runs on any machine, cluster, cloud provider, and Kubernetes, and features a growing ecosystem of community integrations.
Install Ray with pip install ray. For nightly wheels, see the Installation page.
Today's ML workloads are increasingly compute-intensive. As convenient as they are, single-node development environments such as your laptop cannot scale to meet these demands.
Ray is a unified way to scale Python and AI applications from a laptop to a cluster.
With Ray, you can seamlessly scale the same code from a laptop to a cluster. Ray is designed to be general-purpose, meaning that it can performantly run any kind of workload. If your application is written in Python, you can scale it with Ray, no other infrastructure required.
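As a minimal sketch of what "the same code from a laptop to a cluster" looks like with Ray Core tasks (the function here is illustrative, not from the Ray docs):
import ray

ray.init()  # starts a local Ray runtime; on a cluster, the same code connects to it

@ray.remote
def square(x):
    # An ordinary Python function turned into a distributable task.
    return x * x

# Launch four tasks in parallel and gather their results.
futures = [square.remote(i) for i in range(4)]
print(ray.get(futures))  # [0, 1, 4, 9]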
Platform | Purpose | Estimated Response Time | Support Level |
---|---|---|---|
Discourse Forum | For discussions about development and questions about usage. | < 1 day | Community |
GitHub Issues | For reporting bugs and filing feature requests. | < 2 days | Ray OSS Team |
Slack | For collaborating with other Ray users. | < 2 days | Community |
StackOverflow | For asking questions about how to use Ray. | 3-5 days | Community |
Meetup Group | For learning about Ray projects and best practices. | Monthly | Ray DevRel |
Twitter | For staying up-to-date on new features. | Daily | Ray DevRel |
Author: Ray-project
Source Code: https://github.com/ray-project/ray
License: Apache-2.0 license
1685984760
PhotoPrism® is an AI-Powered Photos App for the Decentralized Web. It makes use of the latest technologies to tag and find pictures automatically without getting in your way. You can run it at home, on a private server, or in the cloud.
To get a first impression, you are welcome to play with our public demo. Be careful not to upload any private pictures.
Our mission is to provide the most user- and privacy-friendly solution to keep your pictures organized and accessible. That's why PhotoPrism was built from the ground up to run wherever you need it, without compromising freedom, privacy, or functionality:
Being completely self-funded and independent, we can promise you that we will never sell your data and that we will always be transparent about our software and services. Your data will never be shared with Google, Amazon, Microsoft or Apple unless you intentionally upload files to one of their services. 🔒
Step-by-step installation instructions for our self-hosted community edition can be found on docs.photoprism.app - all you need is a Web browser and Docker to run the server. It is available for Mac, Linux, and Windows.
The stable version and development preview have been built into a single multi-arch image for 64-bit AMD, Intel, and ARM processors. That means Raspberry Pi 3 / 4 owners can pull from the same repository, enjoy the exact same functionality, and follow the regular installation instructions after going through a short list of requirements.
Existing users are advised to update their docker-compose.yml config based on our examples available at dl.photoprism.app/docker.
PhotoPrism is 100% self-funded and independent. Your continued support helps us provide more features to the public, release regular updates, and remain independent!
Our members enjoy additional features, including access to interactive world maps, and can join our private chat room to connect with our team. We currently offer several membership options.
If you currently support us through GitHub Sponsors, you can also register on our website and use the Activate GitHub Sponsors Membership button to link your account. For details on this and how to link your Patreon account, see our Activation Guide.
You are welcome to contact us for change requests, membership questions, and business partnerships.
View Membership FAQ › Sign Up ›
Please also leave a star on GitHub if you like this project. It provides additional motivation to keep going.
A big thank you to all current and past sponsors, whose generous support has been and continues to be essential to the success of the project!
View Sponsors › View Credits ›
Visit docs.photoprism.app/user-guide to learn how to sync, organize, and share your pictures. If you need help installing our software at home, you can join us on Reddit, ask in our Community Chat, or post your question in GitHub Discussions.
Common problems can be quickly diagnosed and solved using the Troubleshooting Checklists in Getting Started. Eligible members are also welcome to email us for technical support and personalized advice.
Our Project Roadmap shows what tasks are in progress and what features will be implemented next. You are invited to give ideas you like a thumbs-up, so we know what's most popular.
Be aware that we have a zero-bug policy and do our best to help users when they need support or have other questions. This comes at a price though, as we can't give exact release dates for new features. Our team receives many more requests than can be implemented, so we want to emphasize that we are in no way obligated to implement the features, enhancements, or other changes you request. We do, however, appreciate your feedback and carefully consider all requests.
Because sustained funding is key to quickly releasing new features, we encourage you to support our mission by signing up as a sponsor or purchasing a commercial license. Ultimately, that's what's best for the product and the community.
We kindly ask you not to report bugs via GitHub Issues unless you are certain to have found a fully reproducible and previously unreported issue that must be fixed directly in the app. Thank you for your careful consideration!
Follow us on Twitter and join the Community Chat to get regular updates, connect with other users, and discuss your ideas. Our Code of Conduct explains the "dos and don’ts" when interacting with other community members.
Feel free to contact us at hello@photoprism.app with anything that is on your mind. We appreciate your feedback! Due to the high volume of emails we receive, our team may be unable to get back to you immediately. We do our best to respond within five business days or less.
We welcome contributions of any kind, including blog posts, tutorials, testing, writing documentation, and pull requests. Our Developer Guide contains all the information necessary for you to get started.
PhotoPrism® is a registered trademark. By using the software and services we provide, you agree to our Terms of Service, Privacy Policy, and Code of Conduct. Docs are available under the CC BY-NC-SA 4.0 License; additional terms may apply.
Author: Photoprism
Source Code: https://github.com/photoprism/photoprism
License: View license
1685965169
This tutorial was designed for easily diving into TensorFlow, through examples. For readability, it includes both notebooks and source codes with explanation, for both TF v1 & v2.
It is suitable for beginners who want to find clear and concise examples about TensorFlow. Besides the traditional 'raw' TensorFlow implementations, you can also find the latest TensorFlow API practices (such as layers, estimator, dataset, ...).
The tutorial index for TF v1 is available here: TensorFlow v1.15 Examples.
Some examples require MNIST dataset for training and testing. Don't worry, this dataset will automatically be downloaded when running examples. MNIST is a database of handwritten digits, for a quick description of that dataset, you can check this notebook.
Official Website: http://yann.lecun.com/exdb/mnist/.
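If you just want to peek at MNIST outside of these examples, one convenient way (an assumption on our part, not how the repository's scripts load it) is the Keras loader bundled with TensorFlow:
import tensorflow as tf

# The first call downloads MNIST to ~/.keras/datasets; later calls use the cache.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)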
To download all the examples, simply clone this repository:
git clone https://github.com/aymericdamien/TensorFlow-Examples
To run them, you also need the latest version of TensorFlow. To install it:
pip install tensorflow
or (with GPU support):
pip install tensorflow_gpu
For more details about TensorFlow installation, you can check the TensorFlow Installation Guide.
The tutorial index for TF v1 is available here: TensorFlow v1.15 Examples.
The following examples come from TFLearn, a library that provides a simplified interface for TensorFlow. You can have a look; there are many examples and pre-built operations and layers.
Update (05/16/2020): Moving all default examples to TF2. For TF v1 examples: check here.
Author: Aymericdamien
Source Code: https://github.com/aymericdamien/TensorFlow-Examples
License: View license
1685945880
This repository hosts the development of the Keras library.
Keras is a deep learning API written in Python, running on top of the machine learning platform TensorFlow. It was developed with a focus on enabling fast experimentation and providing a delightful developer experience.
The purpose of Keras is to give an unfair advantage to any developer looking to ship ML-powered apps.
Keras is simple (but not simplistic), flexible, and powerful.
TensorFlow 2 is an end-to-end, open-source machine learning platform. You can think of it as an infrastructure layer for differentiable programming. It combines four key abilities: efficiently executing low-level tensor operations on CPU, GPU, or TPU; computing the gradient of arbitrary differentiable expressions; scaling computation to many devices; and exporting programs ("graphs") to external runtimes such as servers, browsers, and mobile and embedded devices.
Keras is the high-level API of TensorFlow 2: an approachable, highly-productive interface for solving machine learning problems, with a focus on modern deep learning. It provides essential abstractions and building blocks for developing and shipping machine learning solutions with high iteration velocity.
Keras empowers engineers and researchers to take full advantage of the scalability and cross-platform capabilities of TensorFlow 2: you can run Keras on TPU or on large clusters of GPUs, and you can export your Keras models to run in the browser or on a mobile device.
The core data structures of Keras are layers and models. The simplest type of model is the Sequential model, a linear stack of layers. For more complex architectures, you should use the Keras functional API, which allows you to build arbitrary graphs of layers, or write models entirely from scratch via subclassing. Here is the Sequential model:
from tensorflow.keras.models import Sequential
model = Sequential()
Stacking layers is as easy as .add():
from tensorflow.keras.layers import Dense
model.add(Dense(units=64, activation='relu'))
model.add(Dense(units=10, activation='softmax'))
Once your model looks good, configure its learning process with .compile():
model.compile(loss='categorical_crossentropy',
optimizer='sgd',
metrics=['accuracy'])
If you need to, you can further configure your optimizer. The Keras philosophy is to keep simple things simple, while allowing the user to be fully in control when they need to (the ultimate control being the easy extensibility of the source code via subclassing).
model.compile(loss=tf.keras.losses.categorical_crossentropy,
optimizer=tf.keras.optimizers.SGD(
learning_rate=0.01, momentum=0.9, nesterov=True))
You can now iterate on your training data in batches:
# x_train and y_train are Numpy arrays.
model.fit(x_train, y_train, epochs=5, batch_size=32)
Evaluate your test loss and metrics in one line:
loss_and_metrics = model.evaluate(x_test, y_test, batch_size=128)
Or generate predictions on new data:
classes = model.predict(x_test, batch_size=128)
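For comparison, here is a minimal sketch of the functional API mentioned above, building the same kind of classifier as an explicit graph of layers (the layer sizes are illustrative):
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

inputs = Input(shape=(784,))
x = Dense(units=64, activation='relu')(inputs)      # hidden layer
outputs = Dense(units=10, activation='softmax')(x)  # class probabilities
model = Model(inputs=inputs, outputs=outputs)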
What you just saw is the most elementary way to use Keras.
However, Keras is also a highly-flexible framework suitable to iterate on state-of-the-art research ideas. Keras follows the principle of progressive disclosure of complexity: it makes it easy to get started, yet it makes it possible to handle arbitrarily advanced use cases, only requiring incremental learning at each step.
In much the same way that you were able to train & evaluate a simple neural network above in a few lines, you can use Keras to quickly develop new training procedures or exotic model architectures. Here's a low-level training loop example, combining Keras functionality with the TensorFlow GradientTape:
import tensorflow as tf
# Prepare an optimizer.
optimizer = tf.keras.optimizers.Adam()
# Prepare a loss function.
loss_fn = tf.keras.losses.kl_divergence
# Iterate over the batches of a dataset.
for inputs, targets in dataset:
# Open a GradientTape.
with tf.GradientTape() as tape:
# Forward pass.
predictions = model(inputs)
# Compute the loss value for this batch.
loss_value = loss_fn(targets, predictions)
# Get gradients of loss wrt the weights.
gradients = tape.gradient(loss_value, model.trainable_weights)
# Update the weights of the model.
optimizer.apply_gradients(zip(gradients, model.trainable_weights))
For more in-depth tutorials about Keras, you can check out:
Keras comes packaged with TensorFlow 2 as tensorflow.keras. To start using Keras, simply install TensorFlow 2. You can then import Keras as follows:
from tensorflow import keras
Keras has nightly releases (keras-nightly on PyPI) and stable releases (keras on PyPI). The nightly Keras releases are usually compatible with the corresponding version of the tf-nightly releases (e.g. keras-nightly==2.7.0.dev2021100607 should be used with tf-nightly==2.7.0.dev2021100607). We don't maintain backward compatibility for nightly releases. For stable releases, each Keras version maps to a specific stable version of TensorFlow.
The table below shows the compatibility version mapping between TensorFlow versions and Keras versions.
All the release branches can be found on GitHub.
All the release binaries can be found on PyPI.
You can ask questions and join the development discussion.
You can also post bug reports and feature requests (only) in GitHub issues.
We welcome contributions! Before opening a PR, please read our contributor guide, and the API design guideline.
Read the documentation at keras.io.
Author: Keras-team
Source Code: https://github.com/keras-team/keras
License: Apache-2.0 license
#machinelearning #python #datascience #deeplearning #tensorflow
1685938080
State-of-the-art Machine Learning for JAX, PyTorch and TensorFlow
🤗 Transformers provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio.
These models can be applied on text (for tasks like classification, information extraction, question answering, summarization, translation, and text generation), images (classification, object detection, and segmentation), and audio (speech recognition and audio classification).
Transformer models can also perform tasks on several modalities combined, such as table question answering, optical character recognition, information extraction from scanned documents, video classification, and visual question answering.
🤗 Transformers provides APIs to quickly download and use those pretrained models on a given text, fine-tune them on your own datasets and then share them with the community on our model hub. At the same time, each python module defining an architecture is fully standalone and can be modified to enable quick research experiments.
🤗 Transformers is backed by the three most popular deep learning libraries — Jax, PyTorch and TensorFlow — with a seamless integration between them. It's straightforward to train your models with one before loading them for inference with the other.
To immediately use a model on a given input (text, image, audio, ...), we provide the pipeline API. Pipelines group together a pretrained model with the preprocessing that was used during that model's training. Here is how to quickly use a pipeline to classify positive versus negative texts:
>>> from transformers import pipeline
# Allocate a pipeline for sentiment-analysis
>>> classifier = pipeline('sentiment-analysis')
>>> classifier('We are very happy to introduce pipeline to the transformers repository.')
[{'label': 'POSITIVE', 'score': 0.9996980428695679}]
The second line of code downloads and caches the pretrained model used by the pipeline, while the third evaluates it on the given text. Here the answer is "positive" with a confidence of 99.97%.
Many tasks have a pre-trained pipeline ready to go, in NLP but also in computer vision and speech. For example, we can easily extract detected objects in an image:
>>> import requests
>>> from PIL import Image
>>> from transformers import pipeline
# Download an image with cute cats
>>> url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/coco_sample.png"
>>> image_data = requests.get(url, stream=True).raw
>>> image = Image.open(image_data)
# Allocate a pipeline for object detection
>>> object_detector = pipeline('object-detection')
>>> object_detector(image)
[{'score': 0.9982201457023621,
'label': 'remote',
'box': {'xmin': 40, 'ymin': 70, 'xmax': 175, 'ymax': 117}},
{'score': 0.9960021376609802,
'label': 'remote',
'box': {'xmin': 333, 'ymin': 72, 'xmax': 368, 'ymax': 187}},
{'score': 0.9954745173454285,
'label': 'couch',
'box': {'xmin': 0, 'ymin': 1, 'xmax': 639, 'ymax': 473}},
{'score': 0.9988006353378296,
'label': 'cat',
'box': {'xmin': 13, 'ymin': 52, 'xmax': 314, 'ymax': 470}},
{'score': 0.9986783862113953,
'label': 'cat',
'box': {'xmin': 345, 'ymin': 23, 'xmax': 640, 'ymax': 368}}]
Here we get a list of objects detected in the image, with a box surrounding the object and a confidence score. Here is the original image on the left, with the predictions displayed on the right:
You can learn more about the tasks supported by the pipeline API in this tutorial.
In addition to pipeline, to download and use any of the pretrained models on your given task, all it takes is three lines of code. Here is the PyTorch version:
>>> from transformers import AutoTokenizer, AutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = AutoModel.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="pt")
>>> outputs = model(**inputs)
And here is the equivalent code for TensorFlow:
>>> from transformers import AutoTokenizer, TFAutoModel
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
>>> model = TFAutoModel.from_pretrained("bert-base-uncased")
>>> inputs = tokenizer("Hello world!", return_tensors="tf")
>>> outputs = model(**inputs)
The tokenizer is responsible for all the preprocessing the pretrained model expects, and can be called directly on a single string (as in the above examples) or a list. It will output a dictionary that you can use in downstream code or simply directly pass to your model using the ** argument unpacking operator.
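For instance, a small sketch (the sentences are made up) of tokenizing a batch and unpacking the resulting dictionary into the model:
>>> batch = tokenizer(["Hello world!", "How are you?"], padding=True, return_tensors="pt")
>>> outputs = model(**batch)  # ** unpacks input_ids, attention_mask, etc.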
The model itself is a regular PyTorch nn.Module or a TensorFlow tf.keras.Model (depending on your backend) which you can use as usual. This tutorial explains how to integrate such a model into a classic PyTorch or TensorFlow training loop, or how to use our Trainer API to quickly fine-tune on a new dataset.
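A hedged sketch of the Trainer route (output_dir, train_dataset and eval_dataset are placeholders you would supply yourself):
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(output_dir="out", num_train_epochs=1)
trainer = Trainer(
    model=model,                  # the pretrained model loaded above
    args=training_args,
    train_dataset=train_dataset,  # placeholder: your tokenized training set
    eval_dataset=eval_dataset,    # placeholder: your tokenized eval set
)
trainer.train()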
Easy-to-use state-of-the-art models:
Lower compute costs, smaller carbon footprint:
Choose the right framework for every part of a model's lifetime:
Easily customize a model or an example to your needs:
This repository is tested on Python 3.6+, Flax 0.3.2+, PyTorch 1.3.1+ and TensorFlow 2.3+.
You should install 🤗 Transformers in a virtual environment. If you're unfamiliar with Python virtual environments, check out the user guide.
First, create a virtual environment with the version of Python you're going to use and activate it.
Then, you will need to install at least one of Flax, PyTorch or TensorFlow. Please refer to TensorFlow installation page, PyTorch installation page and/or Flax and Jax installation pages regarding the specific installation command for your platform.
When one of those backends has been installed, 🤗 Transformers can be installed using pip as follows:
pip install transformers
If you'd like to play with the examples or need the bleeding edge of the code and can't wait for a new release, you must install the library from source.
Since Transformers version v4.0.0, we now have a conda channel: huggingface.
🤗 Transformers can be installed using conda as follows:
conda install -c huggingface transformers
Follow the installation pages of Flax, PyTorch or TensorFlow to see how to install them with conda.
NOTE: On Windows, you may be prompted to activate Developer Mode in order to benefit from caching. If this is not an option for you, please let us know in this issue.
All the model checkpoints provided by 🤗 Transformers are seamlessly integrated from the huggingface.co model hub where they are uploaded directly by users and organizations.
🤗 Transformers currently provides a large number of architectures (see here for a high-level summary of each of them). If you want to contribute a new model, a detailed guide and templates are available in the templates folder of the repository. Be sure to check the contributing guidelines and contact the maintainers or open an issue to collect feedback before starting your PR.
These implementations have been tested on several datasets (see the example scripts) and should match the performance of the original implementations. You can find more details on performance in the Examples section of the documentation.
Section | Description |
---|---|
Documentation | Full API documentation and tutorials |
Task summary | Tasks supported by 🤗 Transformers |
Preprocessing tutorial | Using the Tokenizer class to prepare data for the models |
Training and fine-tuning | Using the models provided by 🤗 Transformers in a PyTorch/TensorFlow training loop and the Trainer API |
Quick tour: Fine-tuning/usage scripts | Example scripts for fine-tuning models on a wide range of tasks |
Model sharing and uploading | Upload and share your fine-tuned models with the community |
Migration | Migrate to 🤗 Transformers from pytorch-transformers or pytorch-pretrained-bert |
We now have a paper you can cite for the 🤗 Transformers library:
@inproceedings{wolf-etal-2020-transformers,
title = "Transformers: State-of-the-Art Natural Language Processing",
author = "Thomas Wolf and Lysandre Debut and Victor Sanh and Julien Chaumond and Clement Delangue and Anthony Moi and Pierric Cistac and Tim Rault and Rémi Louf and Morgan Funtowicz and Joe Davison and Sam Shleifer and Patrick von Platen and Clara Ma and Yacine Jernite and Julien Plu and Canwen Xu and Teven Le Scao and Sylvain Gugger and Mariama Drame and Quentin Lhoest and Alexander M. Rush",
booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = oct,
year = "2020",
address = "Online",
publisher = "Association for Computational Linguistics",
url = "https://www.aclweb.org/anthology/2020.emnlp-demos.6",
pages = "38--45"
}
You can test most of our models directly on their pages from the model hub. We also offer private model hosting, versioning, & an inference API for public and private models.
Here are a few example areas: Natural Language Processing, Computer Vision, Audio, and Multimodal tasks.
Write With Transformer, built by the Hugging Face team, is the official demo of this repo’s text generation capabilities.
Transformers is more than a toolkit to use pretrained models: it's a community of projects built around it and the Hugging Face Hub. We want Transformers to enable developers, researchers, students, professors, engineers, and anyone else to build their dream projects.
In order to celebrate the 100,000 stars of transformers, we have decided to put the spotlight on the community, and we have created the awesome-transformers page which lists 100 incredible projects built in the vicinity of transformers.
If you own or use a project that you believe should be part of the list, please open a PR to add it!
English | 简体中文 | 繁體中文 | 한국어 | Español | 日本語 | हिन्दी
Author: huggingface
Source Code: https://github.com/huggingface/transformers
License: Apache-2.0 license
1685934009
TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.
TensorFlow was originally developed by researchers and engineers working on the Google Brain team within Google's Machine Intelligence Research organization to conduct machine learning and deep neural networks research. The system is general enough to be applicable in a wide variety of other domains, as well.
TensorFlow provides stable Python and C++ APIs, as well as non-guaranteed backward compatible API for other languages.
Keep up-to-date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.
See the TensorFlow install guide for the pip package, to enable GPU support, use a Docker container, and build from source.
To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):
$ pip install tensorflow
Other devices (DirectX and MacOS-metal) are supported using Device plugins.
A smaller CPU-only package is also available:
$ pip install tensorflow-cpu
To update TensorFlow to the latest version, add the --upgrade flag to the above commands.
Nightly binaries are available for testing using the tf-nightly and tf-nightly-cpu packages on PyPI.
$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'
For more examples, see the TensorFlow tutorials.
If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.
We use GitHub issues for tracking requests and bugs, please see TensorFlow Forum for general questions and discussion, and please direct specific questions to Stack Overflow.
The TensorFlow project strives to abide by generally accepted best practices in open-source software development.
Follow these steps to patch a specific version of TensorFlow, for example, to apply fixes to bugs or security vulnerabilities: start from the release branch corresponding to your desired version, e.g. r2.8 for version 2.8.
You can find more community-supported platforms and configurations in the TensorFlow SIG Build community builds table.
Build Type | Status | Artifacts |
---|---|---|
Linux CPU | PyPI | |
Linux GPU | PyPI | |
Linux XLA | TBA | |
macOS | PyPI | |
Windows CPU | PyPI | |
Windows GPU | PyPI | |
Android | Download | |
Raspberry Pi 0 and 1 | Py3 | |
Raspberry Pi 2 and 3 | Py3 | |
Libtensorflow MacOS CPU | Status Temporarily Unavailable | Nightly Binary Official GCS |
Libtensorflow Linux CPU | Status Temporarily Unavailable | Nightly Binary Official GCS |
Libtensorflow Linux GPU | Status Temporarily Unavailable | Nightly Binary Official GCS |
Libtensorflow Windows CPU | Status Temporarily Unavailable | Nightly Binary Official GCS |
Libtensorflow Windows GPU | Status Temporarily Unavailable | Nightly Binary Official GCS |
Learn more about the TensorFlow community and how to contribute.
Author: Tensorflow
Source Code: https://github.com/tensorflow/tensorflow
License: Apache-2.0 license
1629223140
TinyML reduces the complexity of adding AI to the edge, enabling new applications where streaming data back to the cloud is prohibitive. Sure, we can detect audio and visual wake words or analyze sensor data for predictive maintenance on a desktop computer. TinyML allows us to take advantage of these advances in hardware to create all sorts of novel applications that simply were not possible before. At SensiML our goal is to empower developers to rapidly add AI to their own edge devices, allowing their applications to autonomously transform raw sensor data into meaningful insight.
We have taken years of lessons learned in creating products that rely on edge-optimized machine learning and distilled that knowledge into a single framework, the SensiML Analytics Toolkit, which provides an end-to-end development platform spanning data collection, labeling, algorithm development, firmware generation, and testing. Building a TinyML application touches on skill sets spanning hardware engineering, embedded programming, software engineering, machine learning, data science, and domain expertise about the application you are building.
1629215645
If so, we have a new set of courses to get you going. The new MLOps specialization builds on the foundational knowledge taught in the popular Deep Learning specialization. It kicks off with an introductory course taught by Andrew Ng, followed by courses taught by Robert Crowe and Laurence Moroney that dive into the details of getting your models out to users.
1629208140
Google came out with a solution and called it TensorFlow. It is an open-source machine learning framework used to tackle and implement tricky, large-scale machine learning and neural network models, making it easier to predict future results. ML models that use multi-layer neural networks are called deep learning models. TensorFlow was developed to boost Google's deep neural network research and can now be seen in the advanced Google search suggestions.
Some of the changes include added support for deep learning in computer graphics and the discontinuation of support for Python 2.