Agnes Sauer

Multi-Task Learning with Pytorch and FastAI

Following the concepts presented in my post Should you use FastAI?, I’d like to show how to train a multi-task deep learning model using the hybrid PyTorch-FastAI approach. The basic idea of this approach is to define a dataset and a model using PyTorch code, and then use FastAI to fit the model. This gives you the flexibility to build complicated datasets and models while still being able to use high-level FastAI functionality.
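
Here is a minimal sketch of how the pieces fit together, assuming fastai v2. The dataset and model below are dummies standing in for real PyTorch code; the names are illustrative, not from the original post:

```python
import torch
from torch import nn
from torch.utils.data import Dataset
from fastai.vision.all import DataLoaders, Learner

class DummyDataset(Dataset):
    """A plain PyTorch dataset; the real one would load and label images."""
    def __len__(self):
        return 64
    def __getitem__(self, i):
        return torch.randn(3, 64, 64), torch.randint(0, 2, (1,)).float()

# A plain PyTorch model, as simple as it gets.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 1))

# fastai takes over from here: it wraps the PyTorch datasets into DataLoaders
# and provides the training loop, schedulers and callbacks via Learner.
dls = DataLoaders.from_dsets(DummyDataset(), DummyDataset(), bs=16)
learn = Learner(dls, model, loss_func=nn.BCEWithLogitsLoss())
learn.fit_one_cycle(1, 1e-3)
```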

A Multi-Task Learning (MTL) model is a model that can perform more than one task. It is as simple as that. In general, as soon as you find yourself optimizing more than one loss function, you are effectively doing MTL.
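
To make that concrete, here is a sketch of a combined loss for the three tasks used below. It assumes a model that returns one output per task and a dataset that yields age, gender and ethnicity targets; the names and the plain unweighted sum are illustrative choices, not necessarily the post’s exact code:

```python
import torch.nn.functional as F

def multitask_loss(preds, age, gender, ethnicity):
    # `preds` is assumed to be the (age, gender, ethnicity) outputs of a
    # multi-head model; fastai calls loss_func(pred, *targets) per batch.
    age_out, gender_out, eth_out = preds
    loss_age = F.l1_loss(age_out.squeeze(-1), age.float())  # regression term
    loss_gender = F.cross_entropy(gender_out, gender)       # 2-class term
    loss_eth = F.cross_entropy(eth_out, ethnicity)          # 5-class term
    return loss_age + loss_gender + loss_eth  # more than one loss, one model
```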

In this demonstration I’ll use the UTKFace dataset. This dataset consists of more than 30k images with labels for age, gender and ethnicity.
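
The labels live in the filenames: per the dataset’s documentation, each file is named [age]_[gender]_[race]_[date&time].jpg, so a small parser is enough to extract all three targets. A sketch, not necessarily the post’s exact code:

```python
from pathlib import Path

def parse_utkface(path):
    # UTKFace filename pattern: [age]_[gender]_[race]_[date&time].jpg
    # gender: 0 = male, 1 = female; race: 0-4 (White, Black, Asian, Indian, Others)
    age, gender, race = path.name.split('_')[:3]
    return int(age), int(gender), int(race)

print(parse_utkface(Path('26_1_3_20170116171048641.jpg')))  # -> (26, 1, 3)
```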

If you want to skip all the talking and jump straight to the code, here is the [

1. Why Multi-Task Learning?

When you look at someone’s picture and try to predict their age, gender and ethnicity, you’re not using completely different parts of your brain, right? What I’m trying to say is that you don’t try to understand the image in three separate ways, one per task. What you’re doing is forming a single understanding of the image and then decoding that understanding into age, gender and ethnicity. Besides that, knowledge gained from gender estimation might help age estimation, knowledge from ethnicity estimation might help gender estimation, and so forth.
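
In model terms, that single understanding is a shared encoder whose output each task decodes with its own small head. A minimal sketch, assuming a torchvision ResNet-34 backbone (the class and attribute names are illustrative, not the post’s exact architecture):

```python
import torch.nn as nn
from torchvision import models

class MultiTaskModel(nn.Module):
    """One shared backbone, three task-specific heads (a sketch)."""
    def __init__(self):
        super().__init__()
        resnet = models.resnet34(weights=None)  # torchvision >= 0.13 API
        # Keep everything up to the global average pool; drop the final fc layer.
        self.encoder = nn.Sequential(*list(resnet.children())[:-1])
        self.age_head = nn.Linear(512, 1)        # age as regression
        self.gender_head = nn.Linear(512, 2)     # 2 gender classes
        self.ethnicity_head = nn.Linear(512, 5)  # 5 ethnicity classes in UTKFace

    def forward(self, x):
        feats = self.encoder(x).flatten(1)  # the single shared representation
        return self.age_head(feats), self.gender_head(feats), self.ethnicity_head(feats)
```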

So, why MTL? The bet is that training a single model to perform all the tasks we’re interested in yields better results than training a separate model for each task. Rich Caruana summarizes the goal of MTL nicely: “MTL improves generalization by leveraging the domain-specific information contained in the training signals of related tasks”.

As this is a practical tutorial for training MTL models, I won’t dig deeper into theory and intuition, but if you want to read more, check out this amazing post by Sebastian Ruder.

#pytorch #artificial-intelligence #deep-learning #data-science #fastai #data-analysis

Adaline Kulas

Multi-cloud Spending: 8 Tips To Lower Cost

A multi-cloud approach is nothing but leveraging two or more cloud platforms to meet an enterprise’s various business requirements. A multi-cloud IT environment incorporates clouds from multiple vendors and removes the dependence on a single public cloud provider. Enterprises can thus choose specific services from multiple public clouds and reap the benefits of each.

Given its affordability and agility, most enterprises now opt for a multi-cloud approach. A 2018 survey of the public cloud services market found that 81% of respondents use services from two or more providers. The cloud computing services market has grown accordingly: according to IDC, the worldwide public cloud services market is set to reach $500 billion within the next four years.

By choosing multi-cloud solutions strategically, enterprises can optimize the benefits of cloud computing and gain some key competitive advantages. They can avoid the lengthy and cumbersome processes involved in buying, installing and testing high-priced systems. IaaS and PaaS solutions have become a windfall for enterprise budgets because they do not incur huge up-front capital expenditure.

However, cost optimization remains a challenge in a multi-cloud environment, and many enterprises end up overpaying, whether they realize it or not. The tips below will help you ensure your money is spent wisely on cloud computing services.

  • Deactivate underused or unattached resources

Most organizations get the simple things wrong, and these turn out to be the root cause of needless spending and resource wastage. The first step to cost optimization in your cloud strategy is to identify the underutilized resources you have been paying for.

Enterprises often continue to pay for resources that were purchased earlier but are no longer useful. Identifying such unused and unattached resources and deactivating them on a regular basis brings you one step closer to cost optimization. If needed, you can deploy automated cloud management tools, which are largely helpful in providing the analytics needed to optimize cloud spending and cut costs on an ongoing basis.

  • Figure out idle instances

Another key cost optimization strategy is to identify idle computing instances and consolidate them into fewer instances. An idle instance may run at a CPU utilization of only 1-5%, yet the service provider bills you for 100% of that instance.

Every enterprise has such non-production instances, which take up unnecessary storage space and lead to overpaying. Re-evaluating your resource allocations regularly and removing unnecessary storage can save you money significantly. Resource allocation is not only a matter of CPU and memory; it is also linked to storage, network and various other factors.

  • Deploy monitoring mechanisms

The key to efficient cost reduction in cloud computing lies in proactive monitoring. A comprehensive view of cloud usage helps enterprises monitor and minimize unnecessary spending. You can make use of various mechanisms for monitoring computing demand.

For instance, you can use a heatmap to visualize the highs and lows in computing demand. Such a heatmap indicates when servers can safely be started and stopped, which in turn reduces costs. You can also deploy automated tools that schedule instances to start and stop. Guided by a heatmap, you can decide whether it is safe to shut down servers on holidays or weekends.

#cloud computing services #all #hybrid cloud #cloud #multi-cloud strategy #cloud spend #multi-cloud spending #multi cloud adoption #why multi cloud #multi cloud trends #multi cloud companies #multi cloud research #multi cloud market

PyTorch For Deep Learning 

What is PyTorch?

PyTorch is a deep learning library developed by Facebook. It can be used for various purposes such as natural language processing, computer vision, etc.

Prerequisites

Python, NumPy, Pandas and Matplotlib

Tensor Basics

What is a tensor?

A tensor is an n-dimensional array of elements. In PyTorch, everything is defined as a tensor.
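
A few illustrative lines, assuming PyTorch is installed:

```python
import torch

a = torch.tensor([[1., 2.], [3., 4.]])  # 2-D tensor built from a Python list
b = torch.zeros(3, 4)                   # 3x4 tensor filled with zeros
c = torch.randn(2, 3)                   # 2x3 tensor of random normal values

print(a.shape, a.dtype)  # torch.Size([2, 2]) torch.float32
print(a @ a)             # matrix multiplication: tensor([[ 7., 10.], [15., 22.]])
```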

#pytorch #pytorch-tutorial #pytorch-course #deep-learning-course #deep-learning

Angela Dickens

Reducing computational constraints in SimCLR

In a previous blog post, we implemented the SimCLR framework in PyTorch. It was a fun exercise to understand and implement it on a simple dataset of 5 categories with a total of just 1250 training images. From the SimCLR paper, we saw how the framework benefits from larger models and larger batch sizes, and can produce results comparable to those of supervised models if enough computing power is available. But these requirements make the framework quite computation-heavy. Wouldn’t it be wonderful if we could keep the simplicity and power of this framework while needing less compute, so that it becomes accessible to everyone? MoCo v2 comes to the rescue.

Datasets

We will implement MoCo v2 in PyTorch on much bigger datasets this time and train our model on Google Colab. We will work with the Imagenette and Imagewoof datasets, made by Jeremy Howard of fast.ai.

Some images from the Imagenette dataset

Some images from the Imagewoof dataset

A quick summary of these datasets (more info is here):

  • Imagenette consists of 10 easily classified classes from Imagenet, with a total of 9,479 training and 3,935 validation set images.
  • Imagewoof is a dataset of 10 difficult classes from Imagenet, difficult because all of them are dog breeds. There are a total of 9,035 training and 3,939 validation set images.

Contrastive Learning — A Review

The way contrastive learning works in self-supervised learning is based on the idea that we want different views of images from the same category to have similar representations. But since we don’t know which images belong to the same category, what is generally done is to bring representations of different views of the same image closer to each other. These different views, taken pairwise, are called positive pairs.
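
In code, a positive pair is simply the same image pushed twice through a random augmentation pipeline. A minimal sketch (the transform list follows the usual SimCLR/MoCo recipe rather than the post’s exact settings, and the file path is a placeholder):

```python
import torchvision.transforms as T
from PIL import Image

# Random augmentations; each call produces a different "view" of the input.
augment = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

img = Image.open('some_image.jpg')         # placeholder path
view1, view2 = augment(img), augment(img)  # two views of one image: a positive pair
```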

#fastai #self-supervised-learning #contrastive-learning #deep-learning #pytorch #deep learning

Jerad Bailey

Google Reveals “What is being Transferred” in Transfer Learning

Recently, researchers from Google proposed a solution to a very fundamental question in the machine learning community: what is being transferred in transfer learning? They presented various tools and analyses to address this fundamental question.

The ability to transfer domain knowledge from a task a machine is trained on to another task where data is usually scarce is one of the most desired capabilities for machines. Researchers around the globe have been using transfer learning in various deep learning applications, including object detection, image classification and medical imaging tasks, among others.

#developers corner #learn transfer learning #machine learning #transfer learning #transfer learning methods #transfer learning resources