Drone Aerial View Segmentation

How to teach a drone to see what is below and segment objects at high resolution

Introduction

Drones have gained popularity in the past few years: compared to satellite imagery, they provide high-resolution images at lower cost, with more flexibility and a low flying altitude, and they can even carry various sensors, such as magnetic sensors. This has led to increasing interest in the field.

[Image: Drone (Unsplash)]

Teaching a drone to see is quite challenging because of the bird’s-eye view: most pre-trained models are trained on the kind of everyday, ground-level images we see daily (ImageNet, PASCAL VOC, COCO). In this project I want to experiment with training on drone datasets. The aims are:

  • A lightweight model (fewer parameters)
  • A high score (I hope so)
  • Low inference latency

Datasets

[2] The Semantic Drone Dataset focuses on semantic understanding of urban scenes to increase the safety of autonomous drone flight and landing procedures. The imagery depicts more than 20 houses from a nadir (bird’s-eye) view, acquired at an altitude of 5 to 30 meters above ground. A high-resolution camera was used to acquire images at a size of 6000x4000 px (24 Mpx). The training set contains 400 publicly available images, and the test set is made up of 200 private images.

[Image: Sample images from the dataset]

The complexity of the dataset is limited to 20 classes (although its masks actually contain 23 classes), listed as follows: tree, grass, other vegetation, dirt, gravel, rocks, water, paved area, pool, person, dog, car, bicycle, roof, wall, fence, fence-pole, window, door, obstacle.
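
As a small convenience when working with the masks, the named classes can be kept in a plain Python list. Note that the ordering below simply follows the text above; it is not the dataset's official index-to-color mapping:

```python
# Named classes from the dataset description. The ordering here is
# illustrative only: the dataset's official color map defines the real
# index-to-class assignment (and the masks contain 23 classes in total).
CLASSES = [
    "tree", "grass", "other vegetation", "dirt", "gravel", "rocks",
    "water", "paved area", "pool", "person", "dog", "car", "bicycle",
    "roof", "wall", "fence", "fence-pole", "window", "door", "obstacle",
]
```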


Methods

Preprocessing

I resize the images to 704 x 1056, keeping the same aspect ratio as the original input. I don’t crop the images into patches, for a few reasons: the objects are not too small, the resized images don’t take much memory, and it saves training time. I split the dataset into three parts: training (306), validation (54), and test (40). To the training data I applied HorizontalFlip, VerticalFlip, GridDistortion, RandomBrightnessContrast, and GaussNoise, with a mini-batch size of 3, as sketched below.
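
A minimal sketch of this augmentation pipeline, assuming the Albumentations library (which provides transforms under exactly these names); the probabilities are illustrative, not taken from the article:

```python
import albumentations as A

# Resizing keeps the original 3:2 aspect ratio (6000x4000 -> 1056x704).
train_transform = A.Compose([
    A.Resize(height=704, width=1056),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.GridDistortion(p=0.2),
    A.RandomBrightnessContrast(p=0.3),
    A.GaussNoise(p=0.2),
])

# Validation and test images are only resized, never augmented.
eval_transform = A.Compose([
    A.Resize(height=704, width=1056),
])

# Usage on an image/mask pair (both numpy arrays):
# out = train_transform(image=image, mask=mask)
# image, mask = out["image"], out["mask"]
```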

Model Architecture

I use two model architectures, purposely choosing light backbones such as MobileNet and EfficientNet for computational efficiency:

  • U-Net with MobileNet_V2 and EfficientNet-B3 as backbones
  • FPN (Feature Pyramid Network) with EfficientNet-B3 as backbone

I followed Parmar’s paper [3] for the model choices (I had already trained different models before, and these choices seem to work); a sketch of how these models can be built is shown below.
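
The article doesn’t name an implementation library, but all three configurations exist by name in segmentation_models_pytorch; here is a minimal sketch under that assumption (the class count follows the earlier note that the masks contain 23 classes):

```python
import segmentation_models_pytorch as smp

NUM_CLASSES = 23  # the dataset masks contain 23 classes

# U-Net decoder with a MobileNetV2 encoder pre-trained on ImageNet
unet_mobilenet = smp.Unet(
    encoder_name="mobilenet_v2",
    encoder_weights="imagenet",
    classes=NUM_CLASSES,
)

# U-Net decoder with an EfficientNet-B3 encoder
unet_effnet_b3 = smp.Unet(
    encoder_name="efficientnet-b3",
    encoder_weights="imagenet",
    classes=NUM_CLASSES,
)

# FPN decoder with an EfficientNet-B3 encoder
fpn_effnet_b3 = smp.FPN(
    encoder_name="efficientnet-b3",
    encoder_weights="imagenet",
    classes=NUM_CLASSES,
)
```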


#computer-vision #remote-sensing #drones #deep-learning


