Learn how to use Google Earth Engine to create training patches for image segmentation, ready to be used with any deep learning framework.
In some previous stories (here, here and here) we used PyTorch and the fast.ai library to segment clouds in satellite images, using a public dataset as reference (Kaggle's 38-Cloud: Cloud Segmentation in Satellite Images). However, there are cases when we need to prepare our own dataset from scratch, and that can be time-consuming without the proper tools.
As it is not my objective here to explain GEE in depth, I will cover just the basics needed to accomplish our final goal, which is to obtain training patches ready to be consumed by any deep learning framework. The workflow I will present here was developed for Sentinel-2 images, but it can easily be adapted to any other imagery available in the Google Earth Engine platform.
These are the steps:
The first thing we need is a free account for the GEE platform, which can easily be obtained at https://signup.earthengine.google.com/. After that, we go to the code editor (https://code.earthengine.google.com/) and create a new empty script (NEW red button on the left). Within the empty script, copy and paste the following code and hit Run. That will zoom you directly to the Orós reservoir in the northeast of Brazil (Figure 1).
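The original script was not preserved in this version of the post, so here is a minimal sketch of what it would look like in the GEE Code Editor. The coordinates are my approximation of the Orós reservoir's location and the zoom level is illustrative; adjust both to taste:

```javascript
// Approximate location of the Orós reservoir, Ceará, Brazil
// (longitude, latitude — adjust for your own region of interest).
var point = ee.Geometry.Point([-38.91, -6.24]);

// Center the map on the point and display it as a layer.
Map.centerObject(point, 11);
Map.addLayer(point, {color: 'red'}, 'ROI point');
```

This snippet runs only inside the Earth Engine Code Editor, where the `ee` and `Map` objects are provided by the platform.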
Figure 1 — GEE code editor with a point created in the middle of the Orós reservoir, in Brazil.
To work on a different area, you can adjust the coordinates. A better way (which we will need in the next step) is to create a Point Geometry directly through the interface (Insert Marker button) and center the map on the newly created geometry.
The next step is to select a specific image for our region of interest. To do this, we will open an image collection with **ee.ImageCollection** (S2_SR stands for Sentinel-2, Surface Reflectance — Level 2A products) and filter for the images that contain the point of interest and lie within a specific period. We will consider a one-month period and display all the images to inspect them visually (Figure 2). The images will be available in the Layers tool. For this specific search, considering that my objective is to identify water surfaces, the last image (indexed as 3) will be the one used for the next step.
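The filtering step above can be sketched as follows. The point coordinates and the date window are placeholders (the exact dates used in the original search were not preserved), so treat them as assumptions to adapt:

```javascript
// Illustrative point of interest (adjust to your own marker).
var point = ee.Geometry.Point([-38.91, -6.24]);

// Open the Sentinel-2 Surface Reflectance collection and keep only
// the images that cover the point and fall within a one-month window.
var collection = ee.ImageCollection('COPERNICUS/S2_SR')
    .filterBounds(point)
    .filterDate('2019-06-01', '2019-07-01');

// Add each image to the Layers tool so it can be inspected visually.
var list = collection.toList(collection.size());
var n = list.size().getInfo();
for (var i = 0; i < n; i++) {
  var img = ee.Image(list.get(i));
  Map.addLayer(img,
               {bands: ['B4', 'B3', 'B2'], min: 0, max: 3000}, // RGB preview
               'Image ' + i,
               false); // layers start unchecked; toggle them one by one
}
```

Each image shows up in the Layers tool indexed by its position in the list, which is how the "image indexed as 3" mentioned above can be identified. Like the previous snippet, this runs only inside the Earth Engine Code Editor.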