This article is part 3 in my series detailing the use and development of SAEMI, a web application I created to perform high-throughput quantitative analysis of electron microscopy images. You can check out the app here and its GitHub here. Also, check out part 1 here (where I talk about the motivation behind creating this app) and part 2 here (where I give a walk-through of how to use the app). In this article, I will talk about how I trained a deep learning model to segment electron microscopy (EM) images for quantitative analysis of nanoparticles.
To train my model, I used a dataset from NFFA-EUROPE containing over 20,000 Scanning Electron Microscopy (SEM) images taken at CNR-IOM in Trieste, Italy. This was one of the only large databases of EM images that I could actually access for free; as you can imagine, most EM images are subject to strict copyright restrictions. The database is separated into 10 different categories (Biological, Fibres, Films_coated_Surface, MEMS_devices_and_electrodes, Nanowires, Particles, Patterned_surface, Porous_Sponge, Powder, Tips). For my training, however, I limited the training images to the Particles category, which consists of just under 4,000 images.
One of the main reasons for this is that for almost all of the other categories, there wasn't actually a way to obtain a useful size distribution from the image. For example, consider the EM image of fibres shown in Figure 1a. After segmenting the image, I could calculate the size of each fibre within the image, but you can also clearly see that the fibres extend past the edges of the image. The sizes I calculate are therefore limited to what is presented in the EM image, and I can't determine how long the fibres actually are from the image alone.
Compare that to the EM image of particles in Figure 1b, where the sizes shown in the image are clearly the sizes of the entire particles. Not only that, but images in this category tended to have the least occlusion, which made labeling and training much easier.
Within the Particles category, all the images were 768 pixels tall and 1024 pixels wide. Most of the particles were roughly circular in shape, with some images featuring hundreds of particles and others featuring only one. The sizes of the particles in pixels also varied considerably due to differences in magnification, with scale bars ranging from 1 micron down to 10 nanometers. Examples of the particle images in the dataset are shown in Figure 2 below:
To begin the training process, the raw images first had to be preprocessed. For the most part, this meant removing the banners that contained image metadata while retaining as much useful image data as possible. To accomplish this, I used a technique called "reflection padding". Essentially, I replaced the banner region with reflections of the image content directly above and below it. An example of this is shown in Figure 3, with the reflections highlighted in red.
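The reflection fill described above can be sketched in plain NumPy. This is a minimal sketch, not the app's actual implementation: the function name is my own, and the banner's row range (`top`, `bottom`) is assumed to come from the banner detection step described next.

```python
import numpy as np

def fill_banner_with_reflection(img, top, bottom):
    """Replace the banner rows [top:bottom) with reflections of the
    image content directly above and below the banner."""
    out = img.copy()
    height = bottom - top
    upper = height // 2          # rows filled by reflecting downward
    lower = height - upper       # rows filled by reflecting upward
    # mirror the rows just above the banner into its top half
    out[top:top + upper] = img[top - upper:top][::-1]
    # mirror the rows just below the banner into its bottom half
    out[bottom - lower:bottom] = img[bottom:bottom + lower][::-1]
    return out
```

Everything outside the banner rows is left untouched, so only the banner region is replaced by mirrored image content.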
In order to perform this transformation, however, I first had to detect the banner regions using OpenCV. To start, I took my image and created a binary mask where all pixel values above 250 become 255 and all pixels at or below the threshold become 0. The code used to perform this is shown below, along with the result in Figure 4.
import cv2

# convert the image to grayscale
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# create a binary mask: pixel values above 250 become 255, all others 0
ret, binary = cv2.threshold(gray, 250, 255, cv2.THRESH_BINARY)
While thresholding was already fairly effective at finding the banner region, it could still pick up some elements of the image that weren't part of the banner. To make sure only the banner region was detected and nothing else remained, I used erosion and dilation on the thresholded image to find the vertical and horizontal lines within it. Erosion is an image processing operation in which a kernel of a given size is scanned across an image; as the kernel travels along the image, the minimum pixel value within the kernel is found, and the pixel value at the kernel's center is replaced by that minimum. Dilation is the analogous operation, but the maximum pixel value is used instead.
The code to detect vertical and horizontal lines is shown below, along with the results in Figure 5.
This article is part 2 in my series detailing the use and development of SAEMI, a web application I created to perform high-throughput quantitative analysis of electron microscopy images. Check out part 1 here (where I go over the motivation behind the app) and part 3 here (where I go over how I trained my image segmentation model). You can also check out the app here and its GitHub here. In this article, I give a walk-through of how to use the app and obtain the best results.
To start off this article, I am going to assume that you have either seen or taken an electron microscopy (EM) image before, or at the very least are familiar with electron microscopes. If not, please check out part 1 in my series, where I detail the motivations behind developing this app. Let's assume, though, that you are a researcher who has taken some EM measurements and now would like to perform some quantitative analysis on your images. More specifically, you would like to determine the mean size of the particles in your image and the standard deviation of the size distribution. As an example, let's say you have an EM image like the one seen in Figure 1 below.
The first thing to note here is the banner displayed along the bottom portion of the image, which contains information about the measurement such as the scale bar, the electron high tension (EHT), and the magnification. Many electron microscopes and their accompanying software add this kind of information (at minimum, a scale bar) to the image. Unfortunately, since the banner is part of the image itself, leaving it in may affect the resulting segmentation from the deep learning model.
In order to reduce the potential for error, this additional "meta-information" should be removed from the image whenever possible. Take care to record the scale bar elsewhere, however, as it will be needed to convert the final calculation from pixels to a physical size in the last step. The "meta-information" itself can be removed through a number of different methods.
The simplest method, and the one I would personally recommend, is to simply crop it out of the image. It has the least potential to introduce further artifacts and requires the least image processing expertise. The only downside is that you may lose useful data by cropping the image.
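Cropping out a bottom banner amounts to a single array slice. In this sketch, the 64-pixel banner height is an assumption for illustration, not a value from the article:

```python
import numpy as np

def crop_banner(img, banner_height=64):
    """Drop the bottom rows containing the metadata banner.

    `banner_height` is assumed to be measured beforehand (in pixels).
    """
    return img[:-banner_height]
```

For example, cropping a 64-pixel banner from a 768x1024 image leaves a 704x1024 image.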
In the interest of presenting other options, you can also use some more involved image processing techniques to remove the banner using either the OpenCV or scikit-image libraries in Python. These methods include reflective padding, nearest-neighbor padding, and constant padding. Examples of all three methods are shown in Figure 2, with the region where the banner used to be highlighted in red.
Fig. 2 a) example of removing the banner using reflective padding b) example of removing the banner by using nearest neighbor padding c) removing the banner by replacing it with a constant padding. source: CNR-IOM (CC-BY)
As can be seen, each of these methods can introduce unintended artifacts into the image, and it is up to you to decide how you would like to deal with the "meta-information".