Tensor2Tensor
Tensor2Tensor, or T2T for short, is a library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research.
T2T was developed by researchers and engineers in the Google Brain team and a community of users. It is now deprecated — we keep it running and welcome bug-fixes, but encourage users to use the successor library Trax.
This IPython notebook explains T2T and runs in your browser using a free VM from Google; no installation is needed. Alternatively, here is a one-command version that installs T2T, downloads MNIST, trains a model and evaluates it:
pip install tensor2tensor && t2t-trainer \
--generate_data \
--data_dir=~/t2t_data \
--output_dir=~/t2t_train/mnist \
--problem=image_mnist \
--model=shake_shake \
--hparams_set=shake_shake_quick \
--train_steps=1000 \
--eval_steps=100
Below we list a number of tasks that can be solved with T2T when you train the appropriate model on the appropriate problem. We give the problem and model below and we suggest a setting of hyperparameters that we know works well in our setup. We usually run either on Cloud TPUs or on 8-GPU machines; you might need to modify the hyperparameters if you run on a different setup.
For evaluating mathematical expressions at the character level involving addition, subtraction and multiplication of both positive and negative decimal numbers with variable digits assigned to symbolic variables, use
--problem=algorithmic_math_two_variables
You can try solving the problem with different transformer models and hyperparameters as described in the paper:
--model=transformer
--hparams_set=transformer_tiny
--model=universal_transformer
--hparams_set=universal_transformer_tiny
--model=universal_transformer
--hparams_set=adaptive_universal_transformer_tiny
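For example, here is a minimal sketch of a run with the tiny Transformer setting; the data and output directories are just placeholders, following the quick-start command above:
t2t-trainer \
--generate_data \
--data_dir=~/t2t_data \
--output_dir=~/t2t_train/math_two_variables \
--problem=algorithmic_math_two_variables \
--model=transformer \
--hparams_set=transformer_tiny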
For answering questions based on a story, use
--problem=babi_qa_concat_task1_1k
You can choose the bAbI task from the range [1, 20] and the subset from 1k or 10k. To combine test data from all tasks into a single test set, use --problem=babi_qa_concat_all_tasks_10k
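No model is prescribed for bAbI here, but as an illustrative sketch you could reuse the tiny Transformer setting from the section above (directories are again placeholders):
t2t-trainer \
--generate_data \
--data_dir=~/t2t_data \
--output_dir=~/t2t_train/babi_task1 \
--problem=babi_qa_concat_task1_1k \
--model=transformer \
--hparams_set=transformer_tiny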
For image classification, we have a number of standard data-sets:
--problem=image_imagenet, or one of the re-scaled versions (image_imagenet224, image_imagenet64, image_imagenet32)
--problem=image_cifar10 (or --problem=image_cifar10_plain to turn off data augmentation)
--problem=image_cifar100
--problem=image_mnist
For ImageNet, we suggest using ResNet or Xception, i.e., --model=resnet --hparams_set=resnet_50 or --model=xception --hparams_set=xception_base. ResNet should get to above 76% top-1 accuracy on ImageNet.
For CIFAR and MNIST, we suggest trying the shake-shake model: --model=shake_shake --hparams_set=shakeshake_big. This setting trained for --train_steps=700000 should yield close to 97% accuracy on CIFAR-10.
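Put together, a sketch of that CIFAR-10 run (the output directory is a placeholder):
t2t-trainer \
--generate_data \
--data_dir=~/t2t_data \
--output_dir=~/t2t_train/cifar10 \
--problem=image_cifar10 \
--model=shake_shake \
--hparams_set=shakeshake_big \
--train_steps=700000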
For (un)conditional image generation, we have a number of standard data-sets:
--problem=img2img_celeba for image-to-image translation, namely superresolution from 8x8 to 32x32.
--problem=image_celeba256_rev for a downsampled 256x256.
--problem=image_cifar10_plain_gen_rev for class-conditional 32x32 generation.
--problem=image_lsun_bedrooms_rev
--problem=image_text_ms_coco_rev for text-to-image generation.
--problem=image_imagenet32_gen_rev for 32x32 or --problem=image_imagenet64_gen_rev for 64x64.
We suggest using the Image Transformer, i.e., --model=imagetransformer, the Image Transformer Plus, i.e., --model=imagetransformerpp, which uses a discretized mixture of logistics, or the variational auto-encoder, i.e., --model=transformer_ae. For CIFAR-10, using --hparams_set=imagetransformer_cifar10_base or --hparams_set=imagetransformer_cifar10_base_dmol yields 2.90 bits per dimension. For ImageNet-32, using --hparams_set=imagetransformer_imagenet32_base yields 3.77 bits per dimension.
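For example, a sketch of the class-conditional CIFAR-10 generation run (the output directory is a placeholder):
t2t-trainer \
--generate_data \
--data_dir=~/t2t_data \
--output_dir=~/t2t_train/cifar10_gen \
--problem=image_cifar10_plain_gen_rev \
--model=imagetransformer \
--hparams_set=imagetransformer_cifar10_base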
For language modeling, we have these data-sets in T2T:
--problem=languagemodel_ptb10k for word-level modeling and --problem=languagemodel_ptb_characters for character-level modeling.
--problem=languagemodel_lm1b32k for subword-level modeling and --problem=languagemodel_lm1b_characters for character-level modeling.
We suggest starting with --model=transformer on this task and using --hparams_set=transformer_small for PTB and --hparams_set=transformer_base for LM1B.
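For instance, a sketch of the PTB word-level run (the output directory is a placeholder):
t2t-trainer \
--generate_data \
--data_dir=~/t2t_data \
--output_dir=~/t2t_train/lm_ptb \
--problem=languagemodel_ptb10k \
--model=transformer \
--hparams_set=transformer_small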
For the task of recognizing the sentiment of a sentence, use
--problem=sentiment_imdb
We suggest using --model=transformer_encoder here and, since it is a small data-set, trying --hparams_set=transformer_tiny and training for only a few steps (e.g., --train_steps=2000).
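A sketch of that sentiment run (the output directory is a placeholder):
t2t-trainer \
--generate_data \
--data_dir=~/t2t_data \
--output_dir=~/t2t_train/imdb \
--problem=sentiment_imdb \
--model=transformer_encoder \
--hparams_set=transformer_tiny \
--train_steps=2000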
For speech-to-text, we have these data-sets in T2T:
Librispeech (US English): --problem=librispeech for the whole set and --problem=librispeech_clean for a smaller but nicely filtered part.
Mozilla Common Voice (US English): --problem=common_voice for the whole set and --problem=common_voice_clean for a quality-checked subset.
For summarizing longer text into shorter text, we have this data-set:
--problem=summarize_cnn_dailymail32k
We suggest using --model=transformer and --hparams_set=transformer_prepend for this task. This yields good ROUGE scores.
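A sketch of that summarization run (the output directory is a placeholder):
t2t-trainer \
--generate_data \
--data_dir=~/t2t_data \
--output_dir=~/t2t_train/cnndm \
--problem=summarize_cnn_dailymail32k \
--model=transformer \
--hparams_set=transformer_prepend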
There are a number of translation data-sets in T2T:
--problem=translate_ende_wmt32k
--problem=translate_enfr_wmt32k
--problem=translate_encs_wmt32k
--problem=translate_enzh_wmt32k
--problem=translate_envi_iwslt32k
--problem=translate_enes_wmt32k
You can get translations in the other direction by appending _rev to the problem name, e.g., for German-English use --problem=translate_ende_wmt32k_rev (note that you still need to download the original data with t2t-datagen --problem=translate_ende_wmt32k).
For all translation problems, we suggest trying the Transformer model: --model=transformer. At first it is best to try the base setting, --hparams_set=transformer_base. When trained on 8 GPUs for 300K steps this should reach a BLEU score of about 28 on the English-German data-set, which is close to state of the art. If training on a single GPU, try the --hparams_set=transformer_base_single_gpu setting. For very good results or larger data-sets (e.g., for English-French), try the big model with --hparams_set=transformer_big.
See the walkthrough below to see how translation works.
Here's a walkthrough training a good English-to-German translation model using the Transformer model from Attention Is All You Need on WMT data.
pip install tensor2tensor
# See what problems, models, and hyperparameter sets are available.
# You can easily swap between them (and add new ones).
t2t-trainer --registry_help
PROBLEM=translate_ende_wmt32k
MODEL=transformer
HPARAMS=transformer_base_single_gpu
DATA_DIR=$HOME/t2t_data
TMP_DIR=/tmp/t2t_datagen
TRAIN_DIR=$HOME/t2t_train/$PROBLEM/$MODEL-$HPARAMS
mkdir -p $DATA_DIR $TMP_DIR $TRAIN_DIR
# Generate data
t2t-datagen \
--data_dir=$DATA_DIR \
--tmp_dir=$TMP_DIR \
--problem=$PROBLEM
# Train
# * If you run out of memory, add --hparams='batch_size=1024'.
t2t-trainer \
--data_dir=$DATA_DIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR
# Decode
DECODE_FILE=$DATA_DIR/decode_this.txt
echo "Hello world" >> $DECODE_FILE
echo "Goodbye world" >> $DECODE_FILE
echo -e 'Hallo Welt\nAuf Wiedersehen Welt' > ref-translation.de
BEAM_SIZE=4
ALPHA=0.6
t2t-decoder \
--data_dir=$DATA_DIR \
--problem=$PROBLEM \
--model=$MODEL \
--hparams_set=$HPARAMS \
--output_dir=$TRAIN_DIR \
--decode_hparams="beam_size=$BEAM_SIZE,alpha=$ALPHA" \
--decode_from_file=$DECODE_FILE \
--decode_to_file=translation.en
# See the translations
cat translation.en
# Evaluate the BLEU score
# Note: Report this BLEU score in papers, not the internal approx_bleu metric.
t2t-bleu --translation=translation.en --reference=ref-translation.de
# Assumes tensorflow or tensorflow-gpu installed
pip install tensor2tensor
# Installs with tensorflow-gpu requirement
pip install tensor2tensor[tensorflow_gpu]
# Installs with tensorflow (cpu) requirement
pip install tensor2tensor[tensorflow]
Binaries:
# Data generator
t2t-datagen
# Trainer
t2t-trainer --registry_help
Library usage:
python -c "from tensor2tensor.models.transformer import Transformer"
Models can be used with any dataset and input mode (or even multiple); all modality-specific processing (e.g. embedding lookups for text tokens) is done with bottom and top transformations, which are specified per-feature in the model. You can easily swap amongst datasets and models by command-line flag with the data generation script t2t-datagen and the training script t2t-trainer.
Problems consist of features such as inputs and targets, and metadata such as each feature's modality (e.g. symbol, image, audio) and vocabularies. Problem features are given by a dataset, which is stored as a TFRecord file with tensorflow.Example protocol buffers. All problems are imported in all_problems.py or are registered with @registry.register_problem. Run t2t-datagen to see the list of available problems and download them.
T2TModels define the core tensor-to-tensor computation. They apply a default transformation to each input and output so that models may deal with modality-independent tensors (e.g. embeddings at the input; and a linear transform at the output to produce logits for a softmax over classes). All models are imported in the models subpackage, inherit from T2TModel, and are registered with @registry.register_model.
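As a minimal sketch of that pattern (the class name, the dense layer, and the import paths are illustrative assumptions, not taken from this article):
from tensor2tensor.utils import registry
from tensor2tensor.utils import t2t_model
import tensorflow as tf

@registry.register_model
class MySimpleModel(t2t_model.T2TModel):
  """Hypothetical model for illustration only."""

  def body(self, features):
    # "inputs" arrives already embedded by the input modality's bottom
    # transformation; the returned tensor is handed to the output modality's
    # top transformation, which produces the logits.
    inputs = features["inputs"]
    return tf.layers.dense(inputs, self.hparams.hidden_size)
Once registered, such a model could be selected with --model=my_simple_model (the registry uses the snake_case form of the class name).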
Hyperparameter sets are encoded in HParams objects and are registered with @registry.register_hparams. Every model and problem has an associated HParams. A basic set of hyperparameters is defined in common_hparams.py, and hyperparameter set functions can compose other hyperparameter set functions.
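A minimal sketch of such composition, assuming the usual tensor2tensor import paths and that registered sets are plain Python functions like transformer.transformer_base (the set name transformer_my_experiment is hypothetical):
from tensor2tensor.models import transformer
from tensor2tensor.utils import registry

@registry.register_hparams
def transformer_my_experiment():
  # Start from the registered transformer_base set and override a few values.
  hparams = transformer.transformer_base()
  hparams.hidden_size = 1024
  hparams.learning_rate_warmup_steps = 16000
  return hparams
Such a set could then be chosen with --hparams_set=transformer_my_experiment.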
The trainer binary is the entrypoint for training, evaluation, and inference. Users can easily switch between problems, models, and hyperparameter sets by using the --model, --problem, and --hparams_set flags. Specific hyperparameters can be overridden with the --hparams flag. --schedule and related flags control local and distributed training/evaluation (see the distributed training documentation).
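For example, a run that overrides a single hyperparameter on top of a registered set (directories are placeholders; the batch_size override is the same one suggested in the translation walkthrough above):
t2t-trainer \
--data_dir=~/t2t_data \
--output_dir=~/t2t_train/ende \
--problem=translate_ende_wmt32k \
--model=transformer \
--hparams_set=transformer_base \
--hparams='batch_size=1024'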
T2T's components are registered using a central registration mechanism that makes it easy to add new ones and to swap amongst them by command-line flag. You can add your own components without editing the T2T codebase by specifying the --t2t_usr_dir flag to t2t-trainer. You can do so for models, hyperparameter sets, modalities, and problems. Please do submit a pull request if your component might be useful to others. See the example_usr_dir for an example user directory.
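As a hedged sketch (the directory name is hypothetical): a user directory containing an __init__.py that imports your registration modules can be checked by listing the registry with your components included:
t2t-trainer \
--t2t_usr_dir=~/my_t2t_usr_dir \
--registry_help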
To add a new dataset, subclass Problem and register it with @registry.register_problem. See TranslateEndeWmt8k for an example. Also see the data generators README.
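A minimal sketch of what a registered problem can look like; the Text2TextProblem base class and the generate_samples hook are assumptions based on the data generators described in that README, and the class name and toy data are hypothetical:
from tensor2tensor.data_generators import text_problems
from tensor2tensor.utils import registry

@registry.register_problem
class CopyToyProblem(text_problems.Text2TextProblem):
  """Hypothetical text-to-text problem that just copies its input."""

  @property
  def is_generate_per_split(self):
    # Generate one set of samples and let T2T split it into train/eval.
    return False

  def generate_samples(self, data_dir, tmp_dir, dataset_split):
    for i in range(1000):
      line = "example %d" % i
      yield {"inputs": line, "targets": line}
After t2t-datagen has generated the data, the problem could be selected with --problem=copy_toy_problem.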
Click this button to open a Workspace on FloydHub. You can use the workspace to develop and test your code on a fully configured cloud GPU machine.
Tensor2Tensor comes preinstalled in the environment, so you can simply open a terminal and run your code.
# Test the quick-start on a Workspace's Terminal with this command
t2t-trainer \
--generate_data \
--data_dir=./t2t_data \
--output_dir=./t2t_train/mnist \
--problem=image_mnist \
--model=shake_shake \
--hparams_set=shake_shake_quick \
--train_steps=1000 \
--eval_steps=100
Note: Ensure compliance with the FloydHub Terms of Service.
When referencing Tensor2Tensor, please cite this paper.
@article{tensor2tensor,
author = {Ashish Vaswani and Samy Bengio and Eugene Brevdo and
Francois Chollet and Aidan N. Gomez and Stephan Gouws and Llion Jones and
\L{}ukasz Kaiser and Nal Kalchbrenner and Niki Parmar and Ryan Sepassi and
Noam Shazeer and Jakob Uszkoreit},
title = {Tensor2Tensor for Neural Machine Translation},
journal = {CoRR},
volume = {abs/1803.07416},
year = {2018},
url = {http://arxiv.org/abs/1803.07416},
}
Tensor2Tensor was used to develop a number of state-of-the-art models and deep learning methods. Here we list some papers that were based on T2T from the start and benefited from its features and architecture in ways described in the Google Research Blog post introducing T2T.
NOTE: This is not an official Google product.
Download Details:
Author: tensorflow
Source Code: https://github.com/tensorflow/tensor2tensor
License: Apache-2.0 License
View more: https://www.inexture.com/services/deep-learning-development/
At Inexture, we work strategically on every project we are associated with. We offer a robust set of AI, ML, and DL consulting services. Our team of data scientists and developers works meticulously on every project and adds a personalized touch to it, and we keep our clients aware of everything being done on their project so that a sense of transparency is maintained. Leverage our services for end-to-end support on your next AI project.
The Deep Learning DevCon 2020 (DLDC 2020) has exciting talks and sessions around the latest developments in deep learning that will be interesting not only for professionals in the field but also for enthusiasts who want to build a career in it. The two-day conference, scheduled for 29th and 30th October, will host paper presentations, tech talks, and workshops that will cover interesting developments as well as the latest research and advances in the area. Further, with deep learning gaining massive traction, this conference will highlight some fascinating use cases from across the world.
Here are ten interesting talks and sessions of DLDC 2020 that one should definitely attend:
Also Read: Why Deep Learning DevCon Comes At The Right Time
By Dipanjan Sarkar
About: Adversarial Robustness in Deep Learning is a session presented by Dipanjan Sarkar, a Data Science Lead at Applied Materials and a Google Developer Expert in Machine Learning. In this session, he will focus on adversarial robustness in deep learning, talk about its importance and the different types of adversarial attacks, and showcase some ways to train neural networks to withstand adversarial perturbations. Considering that deep learning has brought us tremendous achievements in computer vision and natural language processing, this talk will be really interesting for people working in these areas. With this session, attendees will gain a comprehensive understanding of adversarial perturbations in deep learning and common recipes for dealing with them.
Read an interview with Dipanjan Sarkar.
By Divye Singh
About: Imbalance Handling with Combination of Deep Variational Autoencoder and NEATER is a paper presentation by Divye Singh, who holds a master's degree in technology in Mathematical Modeling and Simulation and is interested in research on artificial intelligence, learning-based systems, machine learning, and related areas. In this paper presentation, he will talk about the common problem of class imbalance in medical diagnosis and anomaly detection, and how the problem can be solved with a deep learning framework. The talk focuses on his paper, which proposes a synergistic over-sampling method that generates informative synthetic minority-class data by filtering the noise from the over-sampled examples. Further, he will showcase experimental results on several real-life imbalanced datasets to demonstrate the effectiveness of the proposed method for binary classification problems.
By Dongsuk Hong
About: This is a paper presentation given by Dongsuk Hong, who holds a PhD in Computer Science and works in the big data centre of Korea Credit Information Services. This talk will introduce attendees to machine learning and deep learning models for predicting self-employment default rates using credit information. He will talk about the study, in which a DNN model is implemented for two purposes: as a sub-model for the selection of credit information variables, and cascading into the final model that predicts default rates. Hong's main research area is the analysis of credit information data, with a particular interest in evaluating the performance of prediction models based on machine learning and deep learning. This talk will be interesting for deep learning practitioners who want to build a career in this field.
The Association of Data Scientists (AdaSci), the premier global professional body of data science and ML practitioners, has announced a hands-on workshop on deep learning model deployment on February 6, Saturday.
Over the last few years, the applications of deep learning models have increased exponentially, with use cases ranging from automated driving, fraud detection, healthcare, voice assistants, machine translation and text generation.
Typically, when data scientists start machine learning model development, they mostly focus on which algorithms to use, the feature engineering process, and the hyperparameters that make the model more accurate. However, model deployment is the most critical step in the machine learning pipeline: models can only be beneficial to a business if they are deployed and managed correctly, yet model deployment and management is probably the most under-discussed topic.
In this workshop, attendees get to learn about the ML lifecycle, from gathering data to deploying models. Researchers and data scientists can build a pipeline to log and deploy machine learning models. They will also learn about the challenges associated with machine learning models in production and about handling different toolkits to track and monitor these models once deployed.
In the previous blog, we looked at why few-shot learning is essential and what its applications are. In this article, I will explain the Relation Network for few-shot classification (especially for image classification) in the simplest way possible, and I will analyze its effectiveness in terms of accuracy, the time required for training, and the number of required training parameters.
Please watch the GitHub repository to check out the implementations and keep updated with further experiments.
In few-shot classification, our objective is to design a method that can identify images of any object by analyzing a few sample images of the same class. Let's take one example to understand this. Suppose Bob has a client project to design a 5-class classifier, where the 5 classes can be anything and can even change over time. As discussed in the previous blog, collecting a huge amount of data is a very tedious task. Hence, in such cases, Bob will rely on few-shot classification methods, where his client can give a few example images for each class and his system can then perform classification using these examples, with or without additional training.
In general, four terms are used in few-shot classification: N-way, K-shot, support set, and query set.
At this point, someone new to this concept may wonder why a support set and a query set are needed, so let's understand it intuitively. Whenever we humans see an object for the first time, we get a rough idea about that object. If we later see the same object a second time, we compare it with the image stored in memory from the first time we saw it. This applies to everything around us, whether we see, read, or hear it. Similarly, to recognize new images from the query set, we provide our model a set of examples, i.e., the support set, to compare against.
And this is the basic concept behind the Relation Network as well. In the next sections, I will give a rough idea of how the Relation Network works and perform different experiments on the 102-flower dataset.
The core idea behind the Relation Network is to learn a generalized image representation for each class using the support set, such that we can compare the lower-dimensional representation of each query image with each of the class representations and, based on this comparison, decide the class of each query image. The Relation Network has two modules that allow us to perform these two tasks: an embedding module that produces the feature representations, and a relation module that scores how closely a query representation relates to each class representation. A rough sketch of this idea follows.
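Here is a minimal numpy sketch of the idea under toy assumptions: the embed function stands in for the learned CNN embedding module and relation_score stands in for the learned relation module; in the real model both are neural networks trained end to end.
import numpy as np

def embed(image):
    # Stand-in for the embedding module: just flatten the image.
    return np.asarray(image, dtype=np.float32).reshape(-1)

def relation_score(class_feature, query_feature):
    # Stand-in for the relation module: squash a squared distance into (0, 1].
    return 1.0 / (1.0 + np.sum((class_feature - query_feature) ** 2))

def classify(support_images, support_labels, query_image, n_way):
    # Build one representation per class from the support set (mean of the
    # K-shot embeddings), then pick the class whose representation relates
    # most strongly to the query embedding.
    class_features = []
    for c in range(n_way):
        members = [embed(img) for img, lbl in zip(support_images, support_labels) if lbl == c]
        class_features.append(np.mean(members, axis=0))
    scores = [relation_score(cf, embed(query_image)) for cf in class_features]
    return int(np.argmax(scores))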
Training/Testing procedure:
We can define the whole procedure in just 5 steps.
A few things to know: during training we will use only images from a selected set of classes, and during testing we will use images from unseen classes. For example, from the 102-flower dataset we will use 50% of the classes for training, and the rest will be used for validation and testing. Moreover, in each episode, we will randomly select 5 classes to create the support and query sets and follow the above 5 steps.
That is all you need to know from the implementation point of view. Although the whole process is simple and easy to understand, I recommend reading the published research paper, Learning to Compare: Relation Network for Few-Shot Learning, for a better understanding.
In this post, we will investigate how easily we can train a Deep Q-Network (DQN) agent (Mnih et al., 2015) for Atari 2600 games using the Google reinforcement learning library Dopamine. While many RL libraries exist, this library is specifically designed with four essential features in mind:
We believe these principles make Dopamine one of the best RL learning environments available today. Additionally, we even got the library to work on Windows, which we think is quite a feat!
In my view, the visualization of any trained RL agent is an absolute must in reinforcement learning! Therefore, we will (of course) include this for our own trained agent at the very end!
We will go through all the pieces of code required (which is minimal compared to other libraries), but you can also find all the scripts needed in the following GitHub repo.
The general premise of deep reinforcement learning is to
“derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations.”
- Mnih et al. (2015)
As stated earlier, we will implement the DQN model by Deepmind, which only uses raw pixels and game score as input. The raw pixels are processed using convolutional neural networks similar to image classification. The primary difference lies in the objective function, which for the DQN agent is called the optimal action-value function
Q*(s, a) = max_π E[ rₜ + γ rₜ₊₁ + γ² rₜ₊₂ + … | sₜ = s, aₜ = a, π ],
where rₜ is the maximum sum of rewards at time t discounted by γ, obtained using a behavior policy π = P(a∣s) for each observation-action pair.
There are relatively many details to Deep Q-Learning, such as Experience Replay (Lin, 1993) and an iterative update rule. Thus, we refer the reader to the original paper for an excellent walk-through of the mathematical details.
One key benefit of DQN compared to previous approaches at the time (2015) was the ability to outperform existing methods for Atari 2600 games using the same set of hyperparameters and only pixel values and game score as input, clearly a tremendous achievement.
This post does not include instructions for installing Tensorflow, but we do want to stress that you can use both the CPU and GPU versions.
Nevertheless, assuming you are using Python 3.7.x, these are the libraries you need to install (which can all be installed via pip):
tensorflow-gpu==1.15 (or tensorflow==1.15 for the CPU version)
cmake
dopamine-rl
atari-py
matplotlib
pygame
seaborn
pandas
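As a quick sketch, all of the packages listed above can be installed in one command (pin tensorflow-gpu to 1.15 as noted, or swap in tensorflow==1.15 for the CPU-only version):
pip install tensorflow-gpu==1.15 cmake dopamine-rl atari-py matplotlib pygame seaborn pandas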