PyTorch/XLA: Enabling PyTorch on XLA Devices (e.g. Google TPU)

PyTorch/XLA

PyTorch/XLA is a Python package that uses the XLA deep learning compiler to connect the PyTorch deep learning framework with Cloud TPUs. You can try it right now, for free, on a single Cloud TPU with Google Colab, and use it in production and on Cloud TPU Pods with Google Cloud.

Take a look at one of our Colab notebooks to quickly try different PyTorch networks running on Cloud TPUs and learn how to use Cloud TPUs as PyTorch devices.
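As a quick taste of the programming model, here is a minimal sketch of running a computation on an XLA device (using the `torch_xla` package's `xla_model` API; run this on a machine with `torch_xla` installed, such as a TPU VM):

```python
import torch
import torch_xla.core.xla_model as xm

# Acquire the default XLA device (a TPU core on a TPU VM).
device = xm.xla_device()

# Ordinary PyTorch tensors and ops work once placed on the device.
x = torch.randn(2, 3, device=device)
w = torch.randn(3, 4, device=device)
y = x @ w

# XLA executes lazily; mark_step() cuts the pending graph and runs it.
xm.mark_step()
print(y.shape)  # torch.Size([2, 4])
```

Existing PyTorch models typically need only the device placement changed; see the API Guide below for the full training-loop idioms.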

The rest of this README covers running on Cloud TPU and GPU, available docker images and wheels, performance profiling, troubleshooting, and how to contribute.

Additional information on PyTorch/XLA, including a description of its semantics and functions, is available at PyTorch.org.

User Guide & Best Practices

Our comprehensive user guides are available at:

- Documentation for the latest release
- Documentation for master branch

See the API Guide for best practices when writing networks that run on XLA devices (TPU, GPU, CPU, …).

Running PyTorch/XLA on Cloud TPU and GPU


Running on a Single Cloud TPU VM

Google Cloud offers TPU VMs for more transparent and easier access to the TPU hardware. This is our recommended way of running PyTorch/XLA on Cloud TPU. Please check out our Cloud TPU VM User Guide. To learn more about the Cloud TPU System Architecture, please check out this doc.


How to Run on TPU VM Pods (distributed training)

If a single TPU VM does not suit your requirements, consider using a TPU Pod. A TPU Pod is a collection of TPU devices connected by dedicated high-speed network interfaces. Please check out our Cloud TPU VM Pod User Guide.

Available docker images and wheels

Docker

The following pre-built docker images are available. For instructions on running them, check this doc for TPU VMs and this doc for GPU.
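As an illustration, pulling and entering a release image on a TPU VM typically looks like the following (the exact flags can vary by setup; see the linked docs for the authoritative invocation):

```shell
# Pull a release image for TPU VMs.
sudo docker pull gcr.io/tpu-pytorch/xla:r2.0_3.8_tpuvm

# Start an interactive container; --privileged and --net host
# expose the TPU devices and host networking to the container.
sudo docker run -it --rm --privileged --net host \
    gcr.io/tpu-pytorch/xla:r2.0_3.8_tpuvm /bin/bash
```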

| Version | Cloud TPU VMs Docker |
| --- | --- |
| 2.0 | `gcr.io/tpu-pytorch/xla:r2.0_3.8_tpuvm` |
| 1.13 | `gcr.io/tpu-pytorch/xla:r1.13_3.8_tpuvm` |
| nightly python 3.10 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.10_tpuvm` |
| nightly python 3.8 | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_tpuvm` |
| nightly python 3.10 (>= 2023/04/25) | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.10_tpuvm_YYYYMMDD` |
| nightly python 3.8 (>= 2023/04/25) | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_tpuvm_YYYYMMDD` |
| nightly at date (< 2023/04/25) | `gcr.io/tpu-pytorch/xla:nightly_3.8_tpuvm_YYYYMMDD` |

| Version | GPU CUDA 11.8 + Python 3.8 Docker |
| --- | --- |
| 2.0 | `gcr.io/tpu-pytorch/xla:r2.0_3.8_cuda_11.8` |
| nightly | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_11.8` |
| nightly at date (>= 2023/04/25) | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_11.8_YYYYMMDD` |
| nightly at date (< 2023/04/25) | `gcr.io/tpu-pytorch/xla:nightly_3.8_cuda_11.8_YYYYMMDD` |

| Version | GPU CUDA 11.7 + Python 3.8 Docker |
| --- | --- |
| 2.0 | `gcr.io/tpu-pytorch/xla:r2.0_3.8_cuda_11.7` |
| nightly | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_11.7` |
| nightly at date (>= 2023/04/25) | `us-central1-docker.pkg.dev/tpu-pytorch-releases/docker/xla:nightly_3.8_cuda_11.7_YYYYMMDD` |
| nightly at date (< 2023/04/25) | `gcr.io/tpu-pytorch/xla:nightly_3.8_cuda_11.7_YYYYMMDD` |

| Version | GPU CUDA 11.2 + Python 3.8 Docker |
| --- | --- |
| 1.13 | `gcr.io/tpu-pytorch/xla:r1.13_3.8_cuda_11.2` |

| Version | GPU CUDA 11.2 + Python 3.7 Docker |
| --- | --- |
| 1.13 | `gcr.io/tpu-pytorch/xla:r1.13_3.7_cuda_11.2` |
| 1.12 | `gcr.io/tpu-pytorch/xla:r1.12_3.7_cuda_11.2` |

These images can be used to run PyTorch/XLA on compute instances with GPUs.

Wheel

| Version | Cloud TPU VMs Wheel |
| --- | --- |
| 2.0 | https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-2.0-cp38-cp38-linux_x86_64.whl |
| 1.13 | https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.13-cp38-cp38-linux_x86_64.whl |
| 1.12 | https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.12-cp38-cp38-linux_x86_64.whl |
| 1.11 | https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.11-cp38-cp38-linux_x86_64.whl |
| 1.10 | https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-1.10-cp38-cp38-linux_x86_64.whl |
| nightly (<= 2023/04/25) | https://storage.googleapis.com/tpu-pytorch/wheels/tpuvm/torch_xla-nightly-cp38-cp38-linux_x86_64.whl |
| nightly (>= 2023/04/25) | https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/torch_xla-nightly-cp38-cp38-linux_x86_64.whl |

Note: For TPU Pod customers using XRT (our legacy runtime), we have custom wheels for torch, torchvision, and torch_xla at https://storage.googleapis.com/tpu-pytorch/wheels/xrt.

| Package | Cloud TPU VMs Wheel (XRT on Pod, legacy only) |
| --- | --- |
| torch_xla | https://storage.googleapis.com/tpu-pytorch/wheels/xrt/torch_xla-2.0-cp38-cp38-linux_x86_64.whl |
| torch | https://storage.googleapis.com/tpu-pytorch/wheels/xrt/torch-2.0-cp38-cp38-linux_x86_64.whl |
| torchvision | https://storage.googleapis.com/tpu-pytorch/wheels/xrt/torchvision-2.0-cp38-cp38-linux_x86_64.whl |

| Version | GPU Wheel + Python 3.8 |
| --- | --- |
| 2.0 + CUDA 11.8 | https://storage.googleapis.com/tpu-pytorch/wheels/cuda/118/torch_xla-2.0-cp38-cp38-linux_x86_64.whl |
| 2.0 + CUDA 11.7 | https://storage.googleapis.com/tpu-pytorch/wheels/cuda/117/torch_xla-2.0-cp38-cp38-linux_x86_64.whl |
| 1.13 | https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.13-cp38-cp38-linux_x86_64.whl |
| nightly + CUDA 11.7 (<= 2023/04/25) | https://storage.googleapis.com/tpu-pytorch/wheels/cuda/117/torch_xla-nightly-cp38-cp38-linux_x86_64.whl |
| nightly + CUDA 11.7 (>= 2023/04/25) | https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/11.7/torch_xla-nightly-cp38-cp38-linux_x86_64.whl |
| nightly + CUDA 11.8 (<= 2023/04/25) | https://storage.googleapis.com/tpu-pytorch/wheels/cuda/118/torch_xla-nightly-cp38-cp38-linux_x86_64.whl |
| nightly + CUDA 11.8 (>= 2023/04/25) | https://storage.googleapis.com/pytorch-xla-releases/wheels/cuda/11.8/torch_xla-nightly-cp38-cp38-linux_x86_64.whl |

| Version | GPU Wheel + Python 3.7 |
| --- | --- |
| 1.13 | https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.13-cp37-cp37m-linux_x86_64.whl |
| 1.12 | https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.12-cp37-cp37m-linux_x86_64.whl |
| 1.11 | https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-1.11-cp37-cp37m-linux_x86_64.whl |
| nightly | https://storage.googleapis.com/tpu-pytorch/wheels/cuda/112/torch_xla-nightly-cp37-cp37-linux_x86_64.whl |

| Version | Colab TPU Wheel |
| --- | --- |
| 2.0 | https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-2.0-cp310-cp310-linux_x86_64.whl |

You can also add `+yyyymmdd` after `torch_xla-nightly` to get the nightly wheel of a specific date. To get the companion pytorch and torchvision nightly wheels, replace `torch_xla` with `torch` or `torchvision` in the wheel links above.
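The dated-nightly naming above is plain string substitution on the wheel URLs in the tables; a small hypothetical helper makes it concrete:

```python
def dated_nightly_url(base_url: str, date: str, package: str = "torch_xla") -> str:
    """Turn a nightly wheel URL into the dated-nightly URL.

    date is a yyyymmdd string; per the note above, the dated wheel is
    named <package>-nightly+yyyymmdd instead of <package>-nightly.
    Passing package="torch" or "torchvision" also swaps in the
    companion package name.
    """
    url = base_url.replace("torch_xla", package)
    return url.replace(f"{package}-nightly", f"{package}-nightly+{date}")

base = ("https://storage.googleapis.com/pytorch-xla-releases/wheels/tpuvm/"
        "torch_xla-nightly-cp38-cp38-linux_x86_64.whl")
print(dated_nightly_url(base, "20230501"))
# → .../torch_xla-nightly+20230501-cp38-cp38-linux_x86_64.whl
```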

Installing libtpu

For PyTorch/XLA releases r2.0 and older, and when developing PyTorch/XLA, install the libtpu pip package with the following command:

pip3 install torch_xla[tpuvm]

This is only required on Cloud TPU VMs.

Performance Profiling and Auto-Metrics Analysis

PyTorch/XLA provides a set of performance profiling tools and auto-metrics analysis; check our documentation for the relevant resources.

Troubleshooting

If PyTorch/XLA isn't performing as expected, see the troubleshooting guide, which has suggestions for debugging and optimizing your network(s).

Providing Feedback

The PyTorch/XLA team is always happy to hear from users and OSS contributors! The best way to reach out is by filing an issue on GitHub. Questions, bug reports, feature requests, and build issues are all welcome!

Contributing

See the contribution guide.

Disclaimer

This repository is jointly operated and maintained by Google, Facebook, and a number of individual contributors listed in the CONTRIBUTORS file. For questions directed at Facebook, please send an email to opensource@fb.com. For questions directed at Google, please send an email to pytorch-xla@googlegroups.com. For all other questions, please open an issue in this repository.

Additional Reads

You can find additional useful reading material in our documentation.


Download Details:

Author: pytorch
Source Code: https://github.com/pytorch/xla 
License: View license
