Swin Transformer: Official Implementation

This repo is the official implementation of "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows" as well as its follow-ups. It currently includes code and models for the following tasks:

Image Classification: Included in this repo. See get_started.md for a quick start.

Object Detection and Instance Segmentation: See Swin Transformer for Object Detection.

Semantic Segmentation: See Swin Transformer for Semantic Segmentation.

Video Action Recognition: See Video Swin Transformer.

Semi-Supervised Object Detection: See Soft Teacher.

SSL: Contrastive Learning: See Transformer-SSL.

SSL: Masked Image Modeling: See get_started.md#simmim-support.

Mixture-of-Experts: See get_started for more instructions.

Feature-Distillation: See Feature-Distillation.

Updates

12/29/2022

  1. Nvidia's FasterTransformer now supports Swin Transformer V2 inference, which delivers significant speed improvements on T4 and A100 GPUs.

11/30/2022

  1. Models and code for Feature Distillation are released. Please refer to Feature-Distillation for details and the checkpoints (FD-EsViT-Swin-B, FD-DeiT-ViT-B, FD-DINO-ViT-B, FD-CLIP-ViT-B, FD-CLIP-ViT-L).

09/24/2022

Merged SimMIM, a Masked Image Modeling based pre-training approach applicable to Swin and SwinV2 (and also to ViT and ResNet). Please refer to get started with SimMIM to try SimMIM pre-training.

Released a series of Swin and SwinV2 models pre-trained with the SimMIM approach (see MODELHUB for SimMIM), with model sizes ranging from SwinV2-Small-50M to SwinV2-giant-1B, data sizes ranging from ImageNet-1K-10% to ImageNet-22K, and iterations from 125k to 500k. You may leverage these models to study the properties of MIM methods. Please refer to the data scaling paper for more details.

07/09/2022

News:

  1. SwinV2-G achieves 61.4 mIoU on ADE20K semantic segmentation (+1.5 mIoU over the previous SwinV2-G model) by using an additional feature distillation (FD) approach, setting a new record on this benchmark. FD is an approach that can generally improve the fine-tuning performance of various pre-trained models, including DeiT, DINO, and CLIP. In particular, it improves the CLIP pre-trained ViT-L by +1.6% to reach 89.0% on ImageNet-1K image classification, the most accurate ViT-L model to date.
  2. Merged a PR from Nvidia that links to faster Swin Transformer inference with significant speed improvements on T4 and A100 GPUs.
  3. Merged a PR from Nvidia that enables an option to use pure FP16 (Apex O2) in training, while almost maintaining accuracy.

06/03/2022

  1. Added Swin-MoE, the Mixture-of-Experts variant of Swin Transformer implemented using Tutel (an optimized Mixture-of-Experts implementation). Swin-MoE is introduced in the Tutel paper.

05/12/2022

  1. Pretrained models of Swin Transformer V2 on ImageNet-1K and ImageNet-22K are released.
  2. ImageNet-22K pretrained models for Swin-V1-Tiny and Swin-V2-Small are released.

03/02/2022

  1. Swin Transformer V2 and SimMIM were accepted by CVPR 2022. SimMIM is a self-supervised pre-training approach based on masked image modeling, and a key technique that enables training the 3-billion-parameter Swin V2 model with 40x less labelled data than previous billion-scale models based on JFT-3B.

02/09/2022

  1. Integrated into Huggingface Spaces 🤗 using Gradio. Try out the Web Demo.

10/12/2021

  1. Swin Transformer received ICCV 2021 best paper award (Marr Prize).

08/09/2021

  1. Soft Teacher will appear at ICCV 2021. The code will be released at the GitHub Repo. Soft Teacher is an end-to-end semi-supervised object detection method, achieving a new record on COCO test-dev: 61.3 box AP and 53.0 mask AP.

07/03/2021

  1. Added Swin MLP, an adaptation of Swin Transformer that replaces all multi-head self-attention (MHSA) blocks with MLP layers (more precisely, grouped linear layers); see the sketch after this item. The shifted window configuration can also significantly improve the performance of vanilla MLP architectures.
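
A minimal PyTorch sketch of this replacement is given below. It is an illustrative approximation with assumed shapes and module names, not the repo's exact SwinMLP code: within each window, tokens are mixed by a grouped 1x1 convolution that acts as a per-head linear layer over the spatial positions.

```python
# Illustrative sketch only: the attention block inside each window is replaced
# by a grouped linear layer ("spatial MLP") that mixes tokens separately per head.
import torch
import torch.nn as nn

class WindowSpatialMLP(nn.Module):
    def __init__(self, dim, window_size=7, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        tokens = window_size * window_size
        # One grouped 1x1 convolution mixes the spatial tokens per head.
        self.spatial_mlp = nn.Conv1d(num_heads * tokens, num_heads * tokens,
                                     kernel_size=1, groups=num_heads)

    def forward(self, x):
        # x: (num_windows * B, window_size * window_size, dim)
        B_, N, C = x.shape
        x = x.view(B_, N, self.num_heads, self.head_dim)
        x = x.permute(0, 2, 1, 3).reshape(B_, self.num_heads * N, self.head_dim)
        x = self.spatial_mlp(x)  # token mixing within each head
        x = x.reshape(B_, self.num_heads, N, self.head_dim)
        return x.permute(0, 2, 1, 3).reshape(B_, N, C)

windows = torch.randn(8, 49, 96)                  # 8 windows of 7x7 tokens, 96 channels
print(WindowSpatialMLP(96, 7, 4)(windows).shape)  # torch.Size([8, 49, 96])
```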

06/25/2021

  1. Video Swin Transformer is released at Video-Swin-Transformer. Video Swin Transformer achieves state-of-the-art accuracy on a broad range of video recognition benchmarks, including action recognition (84.9 top-1 accuracy on Kinetics-400 and 86.1 top-1 accuracy on Kinetics-600 with ~20x less pre-training data and ~3x smaller model size) and temporal modeling (69.6 top-1 accuracy on Something-Something v2).

05/12/2021

  1. Used as a backbone for Self-Supervised Learning: Transformer-SSL

Using Swin Transformer as the backbone for self-supervised learning makes it possible to evaluate the transfer performance of the learnt representations on downstream tasks, which was missing in previous works because the ViT/DeiT backbones they used have not been well adapted to downstream tasks.

04/12/2021

Initial commits:

  1. Pretrained models on ImageNet-1K (Swin-T-IN1K, Swin-S-IN1K, Swin-B-IN1K) and ImageNet-22K (Swin-B-IN22K, Swin-L-IN22K) are provided.
  2. The supported code and models for ImageNet-1K image classification, COCO object detection and ADE20K semantic segmentation are provided.
  3. The CUDA kernel implementation for the local relation layer is provided in the branch LR-Net.

Introduction

Swin Transformer (the name Swin stands for Shifted window) was initially described in the arXiv paper and capably serves as a general-purpose backbone for computer vision. It is basically a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows while also allowing for cross-window connections.
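
The core windowing operation can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration with assumed tensor shapes and is not necessarily identical to the repo's implementation.

```python
# Minimal sketch of (shifted) window partitioning for a (B, H, W, C) feature map.
import torch

def window_partition(x, window_size):
    """Split a (B, H, W, C) tensor into non-overlapping windows of shape
    (num_windows * B, window_size, window_size, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

x = torch.randn(2, 56, 56, 96)       # assumed stage-1 feature map size for Swin-T
windows = window_partition(x, 7)     # self-attention runs within each 7x7 window
print(windows.shape)                 # torch.Size([128, 7, 7, 96])

# In alternating blocks the feature map is cyclically shifted before partitioning,
# so tokens near window borders can attend across the previous partition.
shifted = torch.roll(x, shifts=(-3, -3), dims=(1, 2))   # shift by window_size // 2
shifted_windows = window_partition(shifted, 7)
```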

Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin.


Main Results on ImageNet with Pretrained Models

ImageNet-1K and ImageNet-22K Pretrained Swin-V1 Models

| name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 22K model | 1K model |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Swin-T | ImageNet-1K | 224x224 | 81.2 | 95.5 | 28M | 4.5G | 755 | - | github/baidu/config/log |
| Swin-S | ImageNet-1K | 224x224 | 83.2 | 96.2 | 50M | 8.7G | 437 | - | github/baidu/config/log |
| Swin-B | ImageNet-1K | 224x224 | 83.5 | 96.5 | 88M | 15.4G | 278 | - | github/baidu/config/log |
| Swin-B | ImageNet-1K | 384x384 | 84.5 | 97.0 | 88M | 47.1G | 85 | - | github/baidu/config |
| Swin-T | ImageNet-22K | 224x224 | 80.9 | 96.0 | 28M | 4.5G | 755 | github/baidu/config | github/baidu/config |
| Swin-S | ImageNet-22K | 224x224 | 83.2 | 97.0 | 50M | 8.7G | 437 | github/baidu/config | github/baidu/config |
| Swin-B | ImageNet-22K | 224x224 | 85.2 | 97.5 | 88M | 15.4G | 278 | github/baidu/config | github/baidu/config |
| Swin-B | ImageNet-22K | 384x384 | 86.4 | 98.0 | 88M | 47.1G | 85 | github/baidu | github/baidu/config |
| Swin-L | ImageNet-22K | 224x224 | 86.3 | 97.9 | 197M | 34.5G | 141 | github/baidu/config | github/baidu/config |
| Swin-L | ImageNet-22K | 384x384 | 87.3 | 98.2 | 197M | 103.9G | 42 | github/baidu | github/baidu/config |
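
The released classification checkpoints can be inspected with plain PyTorch before fine-tuning. The sketch below is illustrative only: the local filename and the "model" key are assumptions about the checkpoint layout rather than guarantees.

```python
# Illustrative sketch for inspecting a downloaded classification checkpoint.
import torch

ckpt_path = "swin_tiny_patch4_window7_224.pth"   # assumed local filename
ckpt = torch.load(ckpt_path, map_location="cpu")

# Released checkpoints typically wrap the weights under a "model" entry;
# fall back to the raw dict otherwise.
state_dict = ckpt.get("model", ckpt)

num_params = sum(v.numel() for v in state_dict.values()
                 if torch.is_tensor(v) and v.dtype.is_floating_point)
print(f"{len(state_dict)} tensors, ~{num_params / 1e6:.1f}M parameters")
```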

ImageNet-1K and ImageNet-22K Pretrained Swin-V2 Models

| name | pretrain | resolution | window | acc@1 | acc@5 | #params | FLOPs | FPS | 22K model | 1K model |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| SwinV2-T | ImageNet-1K | 256x256 | 8x8 | 81.8 | 95.9 | 28M | 5.9G | 572 | - | github/baidu/config |
| SwinV2-S | ImageNet-1K | 256x256 | 8x8 | 83.7 | 96.6 | 50M | 11.5G | 327 | - | github/baidu/config |
| SwinV2-B | ImageNet-1K | 256x256 | 8x8 | 84.2 | 96.9 | 88M | 20.3G | 217 | - | github/baidu/config |
| SwinV2-T | ImageNet-1K | 256x256 | 16x16 | 82.8 | 96.2 | 28M | 6.6G | 437 | - | github/baidu/config |
| SwinV2-S | ImageNet-1K | 256x256 | 16x16 | 84.1 | 96.8 | 50M | 12.6G | 257 | - | github/baidu/config |
| SwinV2-B | ImageNet-1K | 256x256 | 16x16 | 84.6 | 97.0 | 88M | 21.8G | 174 | - | github/baidu/config |
| SwinV2-B* | ImageNet-22K | 256x256 | 16x16 | 86.2 | 97.9 | 88M | 21.8G | 174 | github/baidu/config | github/baidu/config |
| SwinV2-B* | ImageNet-22K | 384x384 | 24x24 | 87.1 | 98.2 | 88M | 54.7G | 57 | github/baidu/config | github/baidu/config |
| SwinV2-L* | ImageNet-22K | 256x256 | 16x16 | 86.9 | 98.0 | 197M | 47.5G | 95 | github/baidu/config | github/baidu/config |
| SwinV2-L* | ImageNet-22K | 384x384 | 24x24 | 87.6 | 98.3 | 197M | 115.4G | 33 | github/baidu/config | github/baidu/config |

Note:

  • SwinV2-B* (SwinV2-L*) with input resolutions of 256x256 and 384x384 are both fine-tuned from the same pre-trained model, which uses a smaller input resolution of 192x192.
  • SwinV2-B* (384x384) achieves 78.08 acc@1 on ImageNet-1K-V2 while SwinV2-L* (384x384) achieves 78.31.

ImageNet-1K Pretrained Swin MLP Models

| name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 1K model |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Mixer-B/16 | ImageNet-1K | 224x224 | 76.4 | - | 59M | 12.7G | - | official repo |
| ResMLP-S24 | ImageNet-1K | 224x224 | 79.4 | - | 30M | 6.0G | 715 | timm |
| ResMLP-B24 | ImageNet-1K | 224x224 | 81.0 | - | 116M | 23.0G | 231 | timm |
| Swin-T/C24 | ImageNet-1K | 256x256 | 81.6 | 95.7 | 28M | 5.9G | 563 | github/baidu/config |
| SwinMLP-T/C24 | ImageNet-1K | 256x256 | 79.4 | 94.6 | 20M | 4.0G | 807 | github/baidu/config |
| SwinMLP-T/C12 | ImageNet-1K | 256x256 | 79.6 | 94.7 | 21M | 4.0G | 792 | github/baidu/config |
| SwinMLP-T/C6 | ImageNet-1K | 256x256 | 79.7 | 94.9 | 23M | 4.0G | 766 | github/baidu/config |
| SwinMLP-B | ImageNet-1K | 224x224 | 81.3 | 95.3 | 61M | 10.4G | 409 | github/baidu/config |

Note: the access code for baidu is swin. C24 means each head has 24 channels.

ImageNet-22K Pretrained Swin-MoE Models

  • Please refer to get_started for instructions on running Swin-MoE.
  • Pretrained models for Swin-MoE can be found in the MODEL HUB.

Main Results on Downstream Tasks

COCO Object Detection (2017 val)

| Backbone | Method | pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Swin-T | Mask R-CNN | ImageNet-1K | 3x | 46.0 | 41.6 | 48M | 267G |
| Swin-S | Mask R-CNN | ImageNet-1K | 3x | 48.5 | 43.3 | 69M | 359G |
| Swin-T | Cascade Mask R-CNN | ImageNet-1K | 3x | 50.4 | 43.7 | 86M | 745G |
| Swin-S | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 107M | 838G |
| Swin-B | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 145M | 982G |
| Swin-T | RepPoints V2 | ImageNet-1K | 3x | 50.0 | - | 45M | 283G |
| Swin-T | Mask RepPoints V2 | ImageNet-1K | 3x | 50.3 | 43.6 | 47M | 292G |
| Swin-B | HTC++ | ImageNet-22K | 6x | 56.4 | 49.1 | 160M | 1043G |
| Swin-L | HTC++ | ImageNet-22K | 3x | 57.1 | 49.5 | 284M | 1470G |
| Swin-L | HTC++* | ImageNet-22K | 3x | 58.0 | 50.4 | 284M | - |

Note: * indicates multi-scale testing.

ADE20K Semantic Segmentation (val)

| Backbone | Method | pretrain | Crop Size | Lr Schd | mIoU | mIoU (ms+flip) | #params | FLOPs |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Swin-T | UPerNet | ImageNet-1K | 512x512 | 160K | 44.51 | 45.81 | 60M | 945G |
| Swin-S | UPerNet | ImageNet-1K | 512x512 | 160K | 47.64 | 49.47 | 81M | 1038G |
| Swin-B | UPerNet | ImageNet-1K | 512x512 | 160K | 48.13 | 49.72 | 121M | 1188G |
| Swin-B | UPerNet | ImageNet-22K | 640x640 | 160K | 50.04 | 51.66 | 121M | 1841G |
| Swin-L | UPerNet | ImageNet-22K | 640x640 | 160K | 52.05 | 53.53 | 234M | 3230G |

Citing Swin Transformer

@inproceedings{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

Citing Local Relation Networks (the first full-attention visual backbone)

@inproceedings{hu2019local,
  title={Local Relation Networks for Image Recognition},
  author={Hu, Han and Zhang, Zheng and Xie, Zhenda and Lin, Stephen},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  pages={3464--3473},
  year={2019}
}

Citing Swin Transformer V2

@inproceedings{liu2021swinv2,
  title={Swin Transformer V2: Scaling Up Capacity and Resolution}, 
  author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
  booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Citing SimMIM (a self-supervised approach that enables SwinV2-G)

@inproceedings{xie2021simmim,
  title={SimMIM: A Simple Framework for Masked Image Modeling},
  author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Bao, Jianmin and Yao, Zhuliang and Dai, Qi and Hu, Han},
  booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Citing SimMIM-data-scaling

@article{xie2022data,
  title={On Data Scaling in Masked Image Modeling},
  author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Wei, Yixuan and Dai, Qi and Hu, Han},
  journal={arXiv preprint arXiv:2206.04664},
  year={2022}
}

Citing Swin-MoE

@misc{hwang2022tutel,
      title={Tutel: Adaptive Mixture-of-Experts at Scale}, 
      author={Changho Hwang and Wei Cui and Yifan Xiong and Ziyue Yang and Ze Liu and Han Hu and Zilong Wang and Rafael Salas and Jithin Jose and Prabhat Ram and Joe Chau and Peng Cheng and Fan Yang and Mao Yang and Yongqiang Xiong},
      year={2022},
      eprint={2206.03382},
      archivePrefix={arXiv}
}

Download Details:

Author: Microsoft

Official GitHub: https://github.com/microsoft/Swin-Transformer

License: MIT
