Introduction

Training an image segmentation model on new images can be daunting, especially when you need to label your own data. To make this task easier and faster, we built a user-friendly tool that lets you run this entire process in a single Jupyter notebook. In the sections below, we will show you how our tool lets you:

  1. Manually label your own images
  2. Build an effective segmentation model through transfer learning
  3. Visualize the model and its results
  4. Share your project as a Docker image

The main benefits of this tool are that it is easy to use, runs all in one platform, and is well-integrated with existing data science workflows. Through interactive widgets and command prompts, we built a user-friendly way to label images and train the model. On top of that, everything can run in a single Jupyter notebook, making it quick and easy to spin up a model without much overhead. Lastly, by working in a Python environment and using standard libraries like TensorFlow and Matplotlib, this tool can be well-integrated into existing data science workflows, making it ideal for uses like scientific research.

For instance, in microbiology, it can be very useful to segment microscopy images of cells. However, tracking cells over time can easily result in the need to segment hundreds of images, which can be very difficult to do manually. In this article, we will use microscopy images of yeast cells as our dataset and show how we built our tool to differentiate between the background, mother cells, and daughter cells.

1. Labelling

There are many existing tools to create labelled masks for images, including Labelme, ImageJ, and even the graphics editor GIMP. While these are all great tools, they can’t be integrated within a Jupyter notebook, making them harder to use with many existing workflows. Fortunately, Jupyter Widgets make it easy for us to build interactive components and connect them with the rest of our Python code.

To create training masks in the notebook, we have two problems to solve:

  1. Select parts of an image with a mouse
  2. Easily switch between images and select the class to label

To solve the first problem, we used the Matplotlib widget backend and the built-in LassoSelector. The LassoSelector handles drawing a line to show what you are selecting, but we need a little bit of custom code to draw the masks as an overlay:

Class to manage a Lasso Selector for Matplotlib in a Jupyter notebook
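As a rough sketch of that class (the name `LassoLabeler`, the overlay styling, and the default of three classes are illustrative assumptions, not the tool's actual code), the core idea is to convert each lasso polygon into a pixel mask and redraw it as a semi-transparent overlay:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.widgets import LassoSelector

class LassoLabeler:
    """Accumulate lasso selections on an image into a per-pixel class mask."""

    def __init__(self, ax, img, n_classes=3):
        self.ax = ax
        self.img = img
        self.mask = np.zeros(img.shape[:2], dtype=np.uint8)
        self.current_class = 1  # class 0 is the background
        # Pre-compute every pixel's (x, y) coordinate for point-in-polygon tests
        ys, xs = np.mgrid[: img.shape[0], : img.shape[1]]
        self.pixels = np.vstack([xs.ravel(), ys.ravel()]).T
        self.ax.imshow(img)
        # Semi-transparent overlay that shows the mask on top of the image
        self.overlay = self.ax.imshow(self.mask, alpha=0.4, vmin=0, vmax=n_classes)
        # LassoSelector draws the selection line and reports the polygon vertices
        self.lasso = LassoSelector(ax, onselect=self.on_select)

    def on_select(self, verts):
        # Mark every pixel inside the lasso polygon with the current class
        inside = Path(verts).contains_points(self.pixels)
        self.mask.ravel()[inside] = self.current_class
        self.overlay.set_data(self.mask)
        self.ax.figure.canvas.draw_idle()
```

With the `%matplotlib widget` backend active, instantiating this on an axes showing a microscopy image lets you paint regions directly with the mouse.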

For the second problem, we added nice looking buttons and other controls using ipywidgets:

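A minimal sketch of what such a control bar might look like (the class names, button labels, and layout here are assumptions for illustration, not the tool's exact widgets):

```python
import ipywidgets as widgets

# Hypothetical class names for our yeast data
class_names = ["background", "mother cell", "daughter cell"]

# Toggle buttons to pick which class the lasso paints
class_picker = widgets.ToggleButtons(options=class_names, description="Class:")

# Buttons to move between images and to save the finished mask
prev_btn = widgets.Button(description="Previous", icon="arrow-left")
next_btn = widgets.Button(description="Next", icon="arrow-right")
save_btn = widgets.Button(description="Save mask", button_style="success")

def on_class_change(change):
    # In the real tool this would update the lasso controller,
    # e.g. labeler.current_class = class_names.index(change["new"]) + 1
    print(f"Now labelling: {change['new']}")

class_picker.observe(on_class_change, names="value")

# Lay the controls out in a row beneath the Matplotlib figure
controls = widgets.HBox([class_picker, prev_btn, next_btn, save_btn])
```

Displaying `controls` in the notebook renders the buttons, and the `observe` callback keeps the labelling state in sync with whatever class is selected.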

We combined these elements (along with improvements like scroll to zoom) to make a single labelling controller object. Now we can take microscopy images of yeast and segment the mother cells and daughter cells:

Demo of lasso selection image labeler

You can check out the full object, which lets you scroll to zoom, right-click to pan, and select multiple classes, here.

Now we can label a small number of images in the notebook, save them into the correct folder structure, and start to train a CNN!
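The folder layout below is one plausible convention (the directory names, the helper `save_example`, and storing class ids in single-channel PNGs are all assumptions, not the tool's documented format):

```python
from pathlib import Path

import numpy as np
from PIL import Image

def save_example(image, mask, name, root="dataset"):
    """Save an image/mask pair into parallel images/ and masks/ folders."""
    root = Path(root)
    (root / "images").mkdir(parents=True, exist_ok=True)
    (root / "masks").mkdir(parents=True, exist_ok=True)
    Image.fromarray(image).save(root / "images" / f"{name}.png")
    # Store class ids (0 = background, 1 = mother, 2 = daughter)
    # directly as pixel values in a single-channel PNG
    Image.fromarray(mask.astype(np.uint8)).save(root / "masks" / f"{name}.png")
```

Keeping images and masks in parallel folders with matching filenames makes it trivial to pair them up again when building the training dataset.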

2. Model Training

The Model

U-Net is a convolutional neural network that was originally designed to segment biomedical images but has since proven successful on many other types of images. It builds upon standard convolutional architectures to work well with very few training images and to produce more precise segmentations. It is a state-of-the-art model that is also easy to implement using the [segmentation_models](https://github.com/qubvel/segmentation_models) library.


How we built an easy-to-use image segmentation tool