# Tuning a model with Bayesian Optimization on Google AI Platform

In this article of the Google ML tutorials series, we will talk about how to use the AI Platform built-in tool to tune the hyperparameters of your Machine Learning model! We will use a method called Bayesian Optimization to navigate the hyperparameters space and then find a set better than the default.

In the last article, we trained a `RandomForestClassifier` on a bank marketing dataset. We used the default hyperparameters of the algorithm and reached quite good results. But what if we want to tune this model and try to find a better set of hyperparameters? For example, I would like to tune:

• `max_depth`: the maximum depth of each tree in the forest
• `min_samples_split`: the minimum number (or fraction) of samples required to split a node of the tree
• `max_features`: the number (or fraction) of input features to use when training each tree
• `max_samples`: same as `max_features`, but for the rows
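As a reference, here is how those four knobs map onto a scikit-learn `RandomForestClassifier`. The values below are purely illustrative, not a tuned or recommended configuration:

```python
from sklearn.ensemble import RandomForestClassifier

# Illustrative values only: these are the hyperparameters we want to tune.
clf = RandomForestClassifier(
    max_depth=10,            # maximum depth of each tree
    min_samples_split=0.01,  # fraction of samples required to split a node
    max_features=0.5,        # fraction of features used per tree
    max_samples=0.8,         # fraction of rows used per tree (needs bootstrap=True)
    random_state=42,
)
print(clf.get_params()["max_depth"])
```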

The most common ways of searching for the best hyperparameters are the Grid Search and the Random Search methods.

• In the Grid Search, the algorithm trains a model for every single combination of the given hyperparameters and then returns the set with the best performance. This method is really time-consuming, especially when you want to tune more than 2–3 hyperparameters at once, because the number of models to train grows exponentially.
• In the Random Search, the algorithm instead picks n combinations of hyperparameters at random and trains a model for each of them. Here the problem lies in the word random: the algorithm may skip the most effective sets of hyperparameters, especially when n is low.
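To see why Grid Search scales so badly, just count the combinations: with our four hyperparameters and only five candidate values each, a full grid already means hundreds of training runs. The grid below is a hypothetical example:

```python
from itertools import product

# A hypothetical search grid for the four hyperparameters above.
grid = {
    "max_depth": [5, 10, 15, 20, None],
    "min_samples_split": [2, 5, 10, 20, 50],
    "max_features": [0.2, 0.4, 0.6, 0.8, 1.0],
    "max_samples": [0.2, 0.4, 0.6, 0.8, 1.0],
}

# Grid Search trains one model per element of the Cartesian product.
combinations = list(product(*grid.values()))
print(len(combinations))  # 5 * 5 * 5 * 5 = 625 models to train
```

Adding a fifth hyperparameter with five values would multiply this again, to 3125 runs; Random Search simply samples n of these combinations instead.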

In this tutorial, we will use the Bayesian Optimization method with a little help from Google AI Platform! But first, what is Bayesian Optimization?

Even though in this article we will focus more on the code than on the theory behind the method, I’ll try to give a quick overview. For a more robust and complete introduction, I suggest taking a look at these articles (1 and 2).

In a certain way, Bayesian Optimization takes the best from both methods above: it picks a subsample of all the possible combinations of hyperparameters, but the picking is done in a more informed way. The algorithm models the distribution of the objective function (say, the average precision of our model) with a surrogate function, whose domain is the given hyperparameter space. It then explores this distribution by trying different sets of hyperparameters. At each trial, it gains more information (in Bayesian fashion) about the real distribution of the objective function, so it can move to a more “promising” subset of the domain.

For this specific reason, keep in mind that Bayesian Optimization cannot be fully parallelized (as opposed to Grid and Random Search), since each iteration learns from the previous ones.
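The snippet below is only a toy illustration of that sequential idea, not real Bayesian Optimization: instead of a proper surrogate model it simply perturbs the best point found so far. Still, it shows the key property that each trial is informed by the earlier ones, unlike Random Search. The quadratic `objective` stands in for "train a model with hyperparameter x and return its loss":

```python
import random

random.seed(0)

def objective(x):
    # Stand-in for training a model and measuring its validation loss.
    return (x - 3.2) ** 2

history = []                          # (x, loss) pairs observed so far
best_x, best_y = 8.0, objective(8.0)  # initial guess
history.append((best_x, best_y))

for trial in range(20):
    # Propose a new candidate near the current best: each proposal uses
    # information gathered in earlier trials, so trials cannot all run
    # in parallel, just like in Bayesian Optimization.
    candidate = best_x + random.gauss(0, 1.0)
    y = objective(candidate)
    history.append((candidate, y))
    if y < best_y:
        best_x, best_y = candidate, y

print(round(best_x, 2), round(best_y, 4))
```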

Now let’s train some models! For this tutorial, we will follow the same steps as in the last article:

• store the data on Google Storage
• write a Python application to train the model
• launch a training job on AI Platform
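On AI Platform, the hyperparameter search space itself is declared in the job configuration rather than in code. A sketch of what such a config could look like for our four hyperparameters (the metric tag, bounds, and trial counts below are illustrative, not taken from the original article):

```yaml
trainingInput:
  hyperparameters:
    goal: MAXIMIZE
    hyperparameterMetricTag: average_precision
    maxTrials: 20
    maxParallelTrials: 2        # keep this low: trials learn from each other
    params:
      - parameterName: max_depth
        type: INTEGER
        minValue: 5
        maxValue: 30
        scaleType: UNIT_LINEAR_SCALE
      - parameterName: min_samples_split
        type: DOUBLE
        minValue: 0.001
        maxValue: 0.1
        scaleType: UNIT_LOG_SCALE
      - parameterName: max_features
        type: DOUBLE
        minValue: 0.1
        maxValue: 1.0
        scaleType: UNIT_LINEAR_SCALE
      - parameterName: max_samples
        type: DOUBLE
        minValue: 0.1
        maxValue: 1.0
        scaleType: UNIT_LINEAR_SCALE
```

Each tuned hyperparameter is then passed to the training application as a command-line argument named after its `parameterName`.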

The big difference lies in the Python application itself: we need to add a framework that reports the model’s performance back to the Bayesian Optimization service. This framework is called Hypertune: you can install it simply with `pip install cloudml-hypertune`.
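Inside the training script, the only Hypertune-specific step is reporting the evaluation metric after each training run. A minimal sketch of that call (the metric tag and value are illustrative, and the stub fallback is only there so the snippet also runs on a machine without the package installed):

```python
def report_metric(tag, value, step=1):
    """Report a metric to AI Platform's hyperparameter tuning service."""
    try:
        import hypertune  # provided by `pip install cloudml-hypertune`
        hpt = hypertune.HyperTune()
        hpt.report_hyperparameter_tuning_metric(
            hyperparameter_metric_tag=tag,
            metric_value=value,
            global_step=step,
        )
    except ImportError:
        # Local fallback stub: lets the sketch run outside AI Platform.
        print(f"{tag}={value} (step {step})")
    return tag, value

# After evaluating the model, report e.g. its average precision:
report_metric("average_precision", 0.87)
```

The `hyperparameter_metric_tag` must match the `hyperparameterMetricTag` declared in the job configuration, so the tuning service knows which number to optimize.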
