Chelsie Towne

Semi-supervised learning | Algorithmia Blog

Semi-supervised learning is a type of machine learning that uses a combination of a small amount of labeled data and a large amount of unlabeled data to train models. It combines supervised learning, which uses labeled training data, with unsupervised learning, which uses unlabeled training data.

What is supervised and unsupervised learning?

In order to understand semi-supervised learning, it helps to first understand supervised and unsupervised learning.

Every machine learning model or algorithm needs to learn from data. Supervised models are trained on labeled datasets, but labeled data can be hard to find. Labels matter because when the algorithm predicts a label, the difference between the prediction and the true label can be calculated and then minimized to improve accuracy.

Unsupervised learning doesn’t require labeled data: unsupervised models learn to identify patterns and trends, or to categorize data, without labels. Since most of the world’s data is unlabeled, far more data is available for unsupervised learning.
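The loss-minimization idea behind supervised learning can be made concrete with a toy sketch: a one-parameter linear model whose parameter is repeatedly nudged to shrink the gap between its predictions and the labels. The data, model, and learning rate here are illustrative assumptions only, not anything from the article.

```python
# Toy supervised learning: fit y ≈ w * x by minimizing the squared
# difference between predictions and labels with gradient descent.

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # labels, roughly y = 2x

w = 0.0                      # the single model parameter
lr = 0.01                    # learning rate
for _ in range(500):
    # gradient of the mean squared error between prediction w*x and label y
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 1))  # close to 2.0
```

The same loop with no `ys` would have nothing to minimize, which is exactly why unsupervised methods must instead look for structure in the `xs` alone.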

#machine learning

Dejah Reinger

Machine Learning | Everything you need to know

Machine learning is enabling computers to tackle tasks that have, until now, only been carried out by people.

From driving cars to translating speech, machine learning is driving an explosion in the capabilities of artificial intelligence, helping software make sense of the messy and unpredictable real world.

But what exactly is machine learning and what is making the current boom in machine learning possible?

#supervised-learning #machine-learning #reinforcement-learning #semi-supervised-learning #unsupervised-learning

Snorkel: Build ML Models without Labeled Data

With plenty of hands-on tools available, building models on labeled data has become an easy task for data scientists. In the real world, however, many tasks are not well-formatted supervised learning problems: labeled data may be expensive or even impossible to obtain. An alternative approach is to leverage cheap, low-quality data to achieve supervision, which is the topic of this article: weak supervision.

In the following sections, I will go through the concepts of weak supervision. I will also introduce Snorkel, a tool developed at Stanford. Finally, I will show how HK01 uses Snorkel to capture trending topics on Facebook and thereby enhance our recommender engine.


There are several paradigms of algorithms that can remedy the situation when a large amount of high-quality, hand-labeled training data is not available. As the following diagram shows, if you don’t have enough labeled data, you have to find another source of knowledge to achieve a level of supervision comparable to traditional supervision.

source: http://ai.stanford.edu/blog/weak-supervision/

Choosing among these paradigms is tricky; it depends on what you have on hand. Transfer learning is great when a well-trained model exists for a similar task, such as fine-tuning an ImageNet model on your own categories; if you instead have some presumptions about the topological structure of the data, such as the shape of its clusters, you may prefer semi-supervised learning.

So, what kind of situation is the best fit for weak supervision?

You may already have some ideas after reading the definition of weak supervision. Yes, if you have plenty of domain experts but a shortage of labeled data, weak supervision is your pick.

The reason is revealed in the definition: weak supervision enables learning from low-quality, noisy labels. In other words, you can still find patterns, just as supervised learning does, except that you supply multiple noisy labels for each training sample so that the model can generalize knowledge from them.


weak supervision enables supervision by multiple noisy labels

The rationale of weak supervision relies on the fact that noisy data is usually much easier and cheaper to obtain than high-quality data. Imagine you are working for an insurance company and your boss asks for a recommender engine for a whole new product line for which, of course, you have no data. With sales experts, you can set up some rules that are “mostly correct,” such as the new product being more attractive to the elderly. These rules are not perfectly correct, but they are good enough to provide your models with collective intelligence. Most importantly, they are far easier to obtain than perfectly hand-labeled data.

So, the next question is: **how can we inject these rules into our ML models?** The answer is Snorkel.


Snorkel is a system developed at Stanford that allows you to program rules into ML models. The key idea of Snorkel is to build a generative model that represents the causal relationship between the true label and the noisy labels.

The left-hand side of the diagram above is the probabilistic model representing the generative process from the true label to the noisy labels. Although the true label is unobservable, we can still learn the accuracies and correlations of the noisy sources from their agreements and disagreements. Hence, we can estimate P(L|y) for each noisy label, which is essentially an indicator of its quality. By aggregating the noisy labels, we get an estimate of the true label and use it to train our model.
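As a toy illustration of the aggregation step (not Snorkel’s actual generative model, which additionally weighs each source by its estimated accuracy), noisy votes can be combined with a simple majority vote that ignores abstentions:

```python
# Aggregate several noisy label votes per sample by majority vote.
from collections import Counter

SPAM, NORMAL, ABSTAIN = 1, 0, -1

def majority_vote(votes):
    # keep only the non-abstaining votes
    counted = Counter(v for v in votes if v != ABSTAIN)
    if not counted:
        return ABSTAIN          # every source abstained on this sample
    return counted.most_common(1)[0][0]

# rows: one email each; columns: one labeling source each (illustrative data)
noisy_labels = [
    [SPAM, SPAM, ABSTAIN],      # two sources agree -> SPAM
    [NORMAL, ABSTAIN, NORMAL],  # -> NORMAL
    [ABSTAIN, ABSTAIN, ABSTAIN],
]
print([majority_vote(v) for v in noisy_labels])  # [1, 0, -1]
```

The agreements and disagreements visible in a matrix like `noisy_labels` are exactly the signal the generative model uses to estimate each source’s quality.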

In Snorkel, noisy labels are programmed as labeling functions. A labeling function is basically a Python function that hard-codes a rule to determine the label. For example, if you’re writing a program to determine whether an email is spam, it might look something like this:

from snorkel.labeling import labeling_function

SPAM = 1
NORMAL = 0
ABSTAIN = -1

# toy list of banned words (illustrative only)
foul_language = {'viagra', 'lottery'}

@labeling_function()
def contain_hyperlink(x):
    # emails containing a link are labeled as spam
    if 'http' in x.text:
        return SPAM
    return NORMAL

@labeling_function()
def contain_foul_language(x):
    # return NORMAL only after checking every word; returning it
    # inside the loop would stop at the first clean word
    for word in x.text.split():
        if word in foul_language:
            return SPAM
    return NORMAL

In this toy example, you can see the basic elements of Snorkel.

  • Define the labels. In this example, the labels are SPAM, NORMAL, and ABSTAIN; ABSTAIN is returned when a rule cannot determine the label.
  • Define the labeling functions, each declared by adding the @labeling_function() decorator.

#machine-learning #deep-learning #transfer-learning #semi-supervised-learning #weak-supervision #deep learning

Elton Bogan

Supervised Learning vs Unsupervised Learning

Note from Towards Data Science’s editors: _While we allow independent authors to publish articles in accordance with our rules and guidelines, we do not endorse each author’s contribution. You should not rely on an author’s works without seeking professional advice. See our Reader Terms for details._

Nowadays, nearly everything in our lives can be quantified by data. Whether it involves search engine results, social media usage, weather trackers, cars, or sports, data is always being collected to enhance our quality of life. How do we get from all this raw data to an improved level of performance? This article introduces the tools and techniques developed to make sense of unstructured data and discover hidden patterns. Specifically, the main topics covered are:

1. Supervised & Unsupervised Learning and the main techniques corresponding to each one (Classification and Clustering, respectively).

2. An in-depth look at the K-Means algorithm

Goals

1. Understanding the many different techniques used to discover patterns in a set of data

2. In-depth understanding of the K-Means algorithm

1.1 Unsupervised and supervised learning

In unsupervised learning, we are trying to discover hidden patterns in data when we don’t have any labels. We will go through what hidden patterns and labels are, and we will work through real data examples.

What is unsupervised learning?

First, let’s step back to what learning even means. In machine learning and statistics, we are typically trying to find hidden patterns in data. Ideally, we want these hidden patterns to help us in some way. For instance, to help us understand some scientific results, to improve our user experience, or to help us maximize profit in some investment. Supervised learning is when we learn from data, but we have labels for all the data we have seen so far. Unsupervised learning is when we learn from data, but we don’t have any labels.

Let’s use an example of an email. In general, it can be hard to keep our inbox in check. We get many e-mails every day, and a big problem is spam. In fact, it would be an even bigger problem if e-mail providers, like Gmail, were not so effective at keeping spam out of our inboxes. But how do they know whether a particular e-mail is spam or not? This is our first example of a machine learning problem.

Every machine learning problem has a data set, a collection of data points that help us learn. Here, the data set is all the e-mails sent over a month, and each data point is a single e-mail. Whenever you get an e-mail, you can quickly tell whether it’s spam, and you might hit a button to label it as spam or not spam. Now you can imagine that each of your data points has one of two labels, spam or not spam. In the future, you will keep getting emails, but you won’t know in advance which label each one should have. The machine learning problem is to predict the label of the next email: spam or not spam. If our machine learning algorithm works, it can put all the spam in a separate folder. This spam problem is an example of supervised learning. You can imagine a teacher, or supervisor, telling you the label of each data point, which is whether each e-mail is spam or not spam. The supervisor might also be able to tell us whether the labels we predicted were correct.

So what is unsupervised learning? Let’s try another example of a machine learning problem. Imagine you are looking at your emails and realize you have too many. It would be helpful if you could read all the emails on the same topic at the same time. So, you might run a machine learning algorithm that groups together similar emails. After you have run it, you find that there are natural groups of emails in your inbox. This is an example of an unsupervised learning problem: you did not have any labels, because none were assigned to the emails, which means there is no supervisor.
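The topical grouping described above can be sketched as a tiny clustering routine: with no labels at all, emails are grouped by how many words they share. The similarity measure, threshold, and sample emails are illustrative assumptions for the sketch, not a real provider’s algorithm.

```python
# Group emails by word overlap (Jaccard similarity), with no labels.

def jaccard(a, b):
    # fraction of words the two emails have in common
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def group_emails(emails, threshold=0.3):
    groups = []                        # each group is a list of emails
    for email in emails:
        for group in groups:
            if jaccard(email, group[0]) >= threshold:
                group.append(email)    # similar enough: join this group
                break
        else:
            groups.append([email])     # otherwise start a new group
    return groups

emails = [
    "meeting agenda for monday project sync",
    "updated agenda for monday project sync meeting",
    "your invoice for october is attached",
    "october invoice attached please review",
]
print(len(group_emails(emails)))  # 2 topical groups
```

Notice that nothing in the code says what the groups mean; the "meeting" and "invoice" topics emerge from the data itself, which is the defining trait of unsupervised learning.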

#reinforcement-learning #supervised-learning #unsupervised-learning #k-means-clustering #machine-learning

An Implementation of Semi-Supervised Learning

Machine learning can be divided into several categories, of which the most popular are supervised and unsupervised learning. Both methods are very commonly used in the field of data science. **Supervised** learning algorithms are used when all samples in a dataset are completely labeled, while **unsupervised** algorithms are employed to handle datasets without labels at all.

On the other hand, what if we only have partially labeled data? For example, suppose we have a dataset of 10000 samples but only 1500 of them are labeled, while the rest are entirely unlabeled. In such cases, we can utilize what is called a semi-supervised learning method. In this article, we are going to dig deeper into the code of one of the simplest semi-supervised algorithms, namely self-learning.

Semi-supervised learning is applicable when we only have partially labeled data.

The self-learning algorithm itself works like this:

  1. Train the classifier on the existing labeled dataset.
  2. Use the trained classifier to predict labels for a portion of the unlabeled samples.
  3. Add the predictions with high confidence scores to the training set.
  4. Repeat all the steps above.
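The four steps above can be sketched in a few lines. This is a minimal illustration using a nearest-centroid classifier on one-dimensional points; the data, the confidence measure (the margin between the distances to the two nearest centroids), and the threshold are assumptions made for the sketch, not the article’s IMDB setup.

```python
# Self-learning with a nearest-centroid classifier on 1-D points.

def centroids(xs, ys):
    # step 1: "train" by computing the mean of each class's labeled points
    return {c: sum(x for x, y in zip(xs, ys) if y == c) /
               sum(1 for y in ys if y == c)
            for c in set(ys)}

def predict_with_conf(cents, x):
    # step 2: predict the nearest centroid's class; confidence is the
    # margin between the distances to the two nearest centroids
    dists = sorted((abs(x - m), c) for c, m in cents.items())
    (d0, label), (d1, _) = dists[0], dists[1]
    return label, d1 - d0

labeled_x = [0.0, 1.0, 9.0, 10.0]
labeled_y = [0, 0, 1, 1]
unlabeled = [0.5, 9.5, 5.2, 4.8]

for _ in range(4):                                   # step 4: repeat
    cents = centroids(labeled_x, labeled_y)
    confident, rest = [], []
    for x in unlabeled:
        label, conf = predict_with_conf(cents, x)
        (confident if conf > 4.0 else rest).append((x, label))
    # step 3: add only the high-confidence predictions to the training set
    labeled_x += [x for x, _ in confident]
    labeled_y += [y for _, y in confident]
    unlabeled = [x for x, _ in rest]

print(sorted(labeled_y))   # ambiguous points near the middle stay unlabeled
```

The points at 5.2 and 4.8 sit between the two classes, so their confidence never clears the threshold and they are never pseudo-labeled; that caution is what keeps self-learning from reinforcing its own mistakes.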

The dataset used in this project is the IMDB movie reviews dataset, which can easily be downloaded through the Keras API. The objective is pretty straightforward: we need to classify whether a text contains a positive or negative review. In other words, this is essentially a sentiment analysis problem. The dataset is already split into train and test sets, each containing 25000 unique review texts.

#ai #nlp #deep-learning #machine-learning #semi-supervised-learning

Jackson Crist

What Is Semi-Supervised Machine Learning?

Machine learning has proven to be very efficient at classifying images and other unstructured data, a task that is very difficult to handle with classic rule-based software. But before machine learning models can perform classification tasks, they need to be trained on many annotated examples. Data annotation is a slow, manual process that requires humans to review training examples one by one and give each its right label.

In fact, data annotation is such a vital part of machine learning that the growing popularity of the technology has given rise to a huge market for labeled data. From Amazon’s Mechanical Turk to startups such as LabelBox, ScaleAI, and Samasource, there are dozens of platforms and companies whose job is to annotate data to train machine learning systems.

Fortunately, for some classification tasks, you don’t need to label all your training examples. Instead, you can use semi-supervised learning, a machine learning technique that can automate the data-labeling process with a bit of help.

#ai & machine learning #automate #data-labeling #machine learning #semi-supervised learning