Deep Learning in Healthcare — X-Ray Imaging

This is part 2 of the series on applying deep learning to X-ray imaging. Here the focus will be on understanding X-ray images, and chest X-rays in particular.

Interpreting Chest X-Rays:

Figure 1. Chest X-Ray — 1) Lungs, 2) Right Hemidiaphragm, 3) Left Hemidiaphragm, 4) Right Atrium, 5) Left Atrium (By Diego Grez — Radiografía_pulmones_Francisca_Lorca.jpg, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=10302947. Editing by Author)

X-ray images are grayscale images, that is, images in which some pixels are dark and some are bright. In medical imaging terms, these images have pixel values ranging from 0 to 255, where 0 corresponds to completely dark (black) pixels and 255 corresponds to completely white pixels.

Figure 2. The grayscale bar
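As a quick sanity check of these values, the pixel range of an X-ray can be inspected directly. Below is a minimal sketch using OpenCV; the filename chest_xray.png is a placeholder, not a file from this article.

```python
import cv2

# Load an X-ray as a single-channel (grayscale) image.
img = cv2.imread("chest_xray.png", cv2.IMREAD_GRAYSCALE)

# Pixel values lie on the 0-255 grayscale bar:
# 0 = completely black (e.g. air), 255 = completely white (e.g. metal).
print(img.shape)             # (height, width)
print(img.dtype)             # uint8, i.e. 8-bit values in [0, 255]
print(img.min(), img.max())  # darkest and brightest pixels in this image
```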

Different values on the X-ray image correspond to different tissue densities:

  1. Dark — locations in the body that are filled with air (they appear black)
  2. Dark grey — subcutaneous tissue or fat
  3. Light grey — soft tissues such as the heart and blood vessels
  4. Off-white — bones such as the ribs
  5. Bright white — metallic objects such as pacemakers or defibrillators

Physicians interpret an image by looking at the borders between the different densities. In Figure 1, the ribs appear off-white because they are dense tissue, but since the lungs are filled with air, the lungs appear dark. Similarly, below the lungs is the hemidiaphragm, which is again soft tissue and hence appears light grey. These contrasts give a clear understanding of the location and extent of the lungs.

So, if two objects with different densities lie close to each other, they can be demarcated in an X-ray image.

Now, if something happens in the lungs, such as pneumonia, the air-dense lungs become water-dense. This causes the demarcation lines to fade, since the pixel values of the neighboring regions move closer together on the grayscale bar.

To take a chest X-ray, the patient is normally asked to stand, and the X-rays are shot either from front to back (Anterior-Posterior, AP) or from back to front (Posterior-Anterior, PA).

#artificial-intelligence #machine-learning #x-rays #deep-learning #deep learning


Marget D

Top Deep Learning Development Services | Hire Deep Learning Developer

View more: https://www.inexture.com/services/deep-learning-development/

We at Inexture work strategically on every project we are associated with. We offer a robust set of AI, ML, and DL consulting services. Our virtuoso team of data scientists and developers works meticulously on every project and adds a personalized touch to it, and because we keep our clientele aware of everything being done on their project, a sense of transparency is maintained. Leverage our services for your next AI project for end-to-end optimum service.

#deep learning development #deep learning framework #deep learning expert #deep learning ai #deep learning services

Mikel Okuneva

Top 10 Deep Learning Sessions To Look Forward To At DVDC 2020

The Deep Learning DevCon 2020 (DLDC 2020) has exciting talks and sessions around the latest developments in the field of deep learning that will be interesting not only for professionals in this field but also for enthusiasts who are willing to make a career in deep learning. The two-day conference, scheduled for 29th and 30th October, will host paper presentations, tech talks, and workshops that will uncover some interesting developments as well as the latest research and advancements in this area. Further, with deep learning gaining massive traction, this conference will highlight some fascinating use cases from across the world.

Here are ten interesting talks and sessions of DLDC 2020 that one should definitely attend:

Also Read: Why Deep Learning DevCon Comes At The Right Time


Adversarial Robustness in Deep Learning

By Dipanjan Sarkar

**About:** Adversarial Robustness in Deep Learning is a session presented by Dipanjan Sarkar, a Data Science Lead at Applied Materials as well as a Google Developer Expert in Machine Learning. In this session, he will focus on adversarial robustness in deep learning, talking about its importance and the different types of adversarial attacks, and will showcase some ways to train neural networks with adversarial realisation. Considering that deep learning has brought us tremendous achievements in the fields of computer vision and natural language processing, this talk will be really interesting for people working in this area. With this session, attendees will gain a comprehensive understanding of adversarial perturbations in deep learning and common recipes for dealing with them.

Read an interview with Dipanjan Sarkar.

Imbalance Handling with Combination of Deep Variational Autoencoder and NEATER

By Divye Singh

**About:** Imbalance Handling with Combination of Deep Variational Autoencoder and NEATER is a paper presentation by Divye Singh, who has a master's degree in technology in Mathematical Modeling and Simulation and is interested in research on artificial intelligence, learning-based systems, machine learning, etc. In this paper presentation, he will talk about the common problem of class imbalance in medical diagnosis and anomaly detection, and how this problem can be solved with a deep learning framework. The talk focuses on his paper, in which he proposes a synergistic over-sampling method that generates informative synthetic minority-class data by filtering the noise from the over-sampled examples. Further, he will also showcase experimental results on several real-life imbalanced datasets to prove the effectiveness of the proposed method for binary classification problems.

Default Rate Prediction Models for Self-Employment in Korea using Ridge, Random Forest & Deep Neural Network

By Dongsuk Hong

**About:** This is a paper presentation given by Dongsuk Hong, who holds a PhD in Computer Science and works at the big data centre of Korea Credit Information Services. This talk will introduce the attendees to machine learning and deep learning models for predicting self-employment default rates using credit information. He will talk about the study, in which a DNN model is implemented for two purposes: a sub-model for the selection of credit information variables, which cascades into the final model that predicts default rates. Hong's main research area is data analysis of credit information, where he is particularly interested in evaluating the performance of prediction models based on machine learning and deep learning. This talk will be interesting for deep learning practitioners who are willing to make a career in this field.


#opinions #attend dldc 2020 #deep learning #deep learning sessions #deep learning talks #dldc 2020 #top deep learning sessions at dldc 2020 #top deep learning talks at dldc 2020

Deep Learning in Healthcare — X-Ray Imaging

In the previous part, Part 4 (https://towardsdatascience.com/deep-learning-in-healthcare-x-ray-imaging-part-4-the-class-imbalance-problem-364eff4d47bb), we saw how to tackle the class imbalance problem. In this part, we will focus on image normalization and data augmentation.


After the class imbalance problem is taken care of, the next step is to look at ways to improve the performance of the neural network and also make it faster. We already have a similar number of images in the three classes of the training data: 1. Normal (no infection), 2. Bacterial Pneumonia, 3. Viral Pneumonia.

Bar chart of the number of images in each class, from Part 4 (Source: Image created by author)

Image Scaling/Normalization:

Neural networks work best when all the features are on the same scale. Similarly, optimization algorithms such as gradient descent work extremely well when the features are centered at mean zero with a standard deviation of one — i.e., the data has the properties of a standard normal distribution.

This can be done in several ways as shown below.

Case 1: Not recommended

scaled_dataset = (dataset - dataset_mean) / dataset_std_deviation

train, test = split(scaled_dataset)

The entire dataset is scaled first and then split into train and test sets.

Case 2: Not recommended

train, test = split(dataset)

scaled_train = (train - train_mean) / train_std_deviation

scaled_test = (test - test_mean) / test_std_deviation

The dataset is split into train and test sets, and then the training set and the test set are each scaled separately with their own mean and standard deviation.

Case 3: Recommended

train, test = split(dataset)

scaled_train = (train - train_mean) / train_std_deviation

scaled_test = (test - train_mean) / train_std_deviation

The dataset is split into train and test sets, and the training images are scaled. For scaling the test images, we use the mean and standard deviation of the training set rather than those of the test images.

It may look odd to use the mean and standard deviation of the training set to scale the test set, but Case 3 is the best method to follow. The reasoning is as follows:

The test data is ‘unseen data’ for the model, and we use it to check how the model performs on unseen data; that is, it gives a good estimate of whether the model is ready to be used in real-world scenarios.

Now, in a real-world scenario, we might not have a batch of test images to test our model on, but rather a single image. In that case, it is not possible to calculate a mean and standard deviation from one image. Also, in the case of multiple images, knowing the per-batch statistics of the test data would effectively give our model an advantage, and we do not want the model to have any information about the test data.

So the best way to tackle this problem is to go with Case 3 and normalize incoming test data using the statistics computed from the training set. We will then use these statistics to transform our test data and any future data later on.
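As a concrete illustration of Case 3, here is a minimal NumPy sketch. The arrays train_images and test_images and their shapes are assumed placeholders rather than code from the original article.

```python
import numpy as np

# Stand-ins for the real training and test X-rays:
# arrays of shape (num_images, height, width) with values in [0, 255].
train_images = np.random.randint(0, 256, size=(100, 224, 224)).astype(np.float32)
test_images = np.random.randint(0, 256, size=(20, 224, 224)).astype(np.float32)

# Compute the statistics on the training set only (Case 3).
train_mean = train_images.mean()
train_std = train_images.std()

# Scale both sets with the training statistics.
scaled_train = (train_images - train_mean) / train_std
scaled_test = (test_images - train_mean) / train_std

# A single new image arriving at inference time is scaled the same way.
new_image = test_images[0]
scaled_new_image = (new_image - train_mean) / train_std
```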


Data Augmentation:

Data augmentation is a strategy that enables practitioners to significantly increase the diversity of data available for training models, without actually collecting new data. [1]

In Part 4 (https://towardsdatascience.com/deep-learning-in-healthcare-x-ray-imaging-part-4-the-class-imbalance-problem-364eff4d47bb) we saw how to create artificial images using data augmentation; there we used OpenCV to rotate, translate, flip, and blur images. Here we look at how data augmentation is done in Keras.

Advantages of data augmentation:

  1. Improved model results
  2. Reduced overfitting

To implement this we will use the ImageDataGenerator class from the Keras framework. ImageDataGenerator helps generate batches of tensor image data with real-time data augmentation. That is, it can carry out all of these operations (a minimal sketch follows the list below):

  1. Generates batches of images specified in a data frame.
  2. Allows basic data augmentation techniques such as flipping, zooming, scaling, rotating, etc.
  3. Transforms the values in each batch so that their mean is 0 and their standard deviation is 1. This helps model training by standardizing the input distribution.
  4. Converts the single-channel X-ray images (grayscale) to a three-channel format by repeating the values in the image across all channels. We require this because the pre-trained model that we will use later requires three-channel inputs.
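Below is a minimal sketch of such a generator, assuming TensorFlow 2.x Keras. The dataframe train_df (with 'filename' and 'label' columns), the directory path, the target size, and the specific augmentation ranges are illustrative placeholders, not values from the original article.

```python
import pandas as pd
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical dataframe listing the training images and their labels.
train_df = pd.DataFrame({
    "filename": ["normal_001.jpeg", "bacteria_001.jpeg", "virus_001.jpeg"],
    "label": ["normal", "bacterial_pneumonia", "viral_pneumonia"],
})

train_datagen = ImageDataGenerator(
    samplewise_center=True,             # shift each image to mean 0
    samplewise_std_normalization=True,  # scale each image to std 1
    rotation_range=10,                  # basic augmentation: small rotations,
    width_shift_range=0.1,              # shifts and zooms
    height_shift_range=0.1,
    zoom_range=0.1,
    horizontal_flip=False,              # usually avoided for chest X-rays,
)                                       # since left/right orientation matters

train_generator = train_datagen.flow_from_dataframe(
    dataframe=train_df,
    directory="data/train",             # placeholder image directory
    x_col="filename",
    y_col="label",
    target_size=(224, 224),
    color_mode="rgb",                   # grayscale files are loaded as three identical channels
    class_mode="categorical",
    batch_size=32,
)
```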

#towards-data-science #machine-learning #deep-learning #medicine #deep learning

Agnes Sauer

All about images - Types of Images:

Everything we see around us is nothing but an image; we capture them using our mobile cameras. In signal processing terms, an image is a signal which conveys some information. First I will tell you what a signal is and how many types there are; in the later part of this blog I will tell you about images.

We are saying that an image is a signal. Signals carry some information, which may be useful information or random noise. In mathematics, a signal is a function which depends on independent variables; the variables which are responsible for altering the signal are called independent variables. We also have multidimensional signals. Here you will learn about only the three types of signals which are mainly used in cutting-edge techniques such as image processing, computer vision, machine learning, and deep learning.

  • 1D signal: A signal which has only one independent variable. Audio signals are the perfect example: they depend on time, so if you change the time of an audio clip, you will hear the sound at that particular moment.
  • 2D signal: A signal which depends on two independent variables. An image is a 2D signal, as its information depends only on its length and width.
  • 3D signal: A signal which depends on three independent variables. Videos are the best example: a video is just the motion of images with respect to time, so the image's length and width are two independent variables and time is the third.

Types of Images:

  • Analog images: These are natural images. The images which we see with our eyes, such as all physical objects, are analog images. They have continuous values; their amplitude can take infinitely many values.
  • **Digital images:** By quantizing analog images we can produce digital images. Nowadays, almost all cameras produce digital images directly. In digital images, all values are discrete, and each location has a finite amplitude. We mostly use digital images for processing.


Every digital image is made up of a group of pixels, and its coordinate system starts from the top-left corner.

A digital image contains a stack of small rectangles. Each rectangle is called a pixel, the smallest unit in the image. Each pixel has a particular value, its intensity, and this intensity value is produced by a combination of colors. We have millions of colors, but our eye perceives only three colors and their combinations. Those colors are called the primary colors: Red, Green and Blue.


Why only those three colors?

Do not think too much about it: the reason is that the human eye has only three types of color receptors. Different combinations in the stimulation of these receptors enable the human eye to distinguish nearly 350,000 colors.

Let's move back to our image topic:

As of now, we know that an image's intensity values are a combination of Red, Green and Blue. Each pixel in a color image has these three color channels. Generally, we represent each color value in 8 bits, i.e., one byte.

Now you can work out how many bits are required for each pixel: we have 3 colors at each pixel, and each color value is stored in 8 bits, so each pixel takes 24 bits. Such a 24-bit color image can display 2**24 (about 16.7 million) different colors.

Now you have a question: how much memory does it take to store an RGB image of shape 256*256? I think an explanation is not required, but if you want a clear explanation, please comment below.
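For readers who want to check their answer afterwards, here is a small sketch of the arithmetic; the NumPy array is just a stand-in for such an image.

```python
import numpy as np

# A 256 x 256 RGB image with 8 bits (1 byte) per color channel.
image = np.zeros((256, 256, 3), dtype=np.uint8)

bytes_needed = 256 * 256 * 3    # pixels * channels * 1 byte per channel
print(bytes_needed)             # 196608 bytes
print(bytes_needed / 1024)      # 192.0 KB
print(image.nbytes)             # same value, read from the array itself
```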

#machine-learning #computer-vision #image-processing #deep-learning #image #deep learning