Drone Aerial View Segmentation

How to teach a drone to see what is below and segment objects at high resolution

Introduction

Drones have gained popularity in the past few years: they provide higher-resolution images than satellite imagery at lower cost, offer flexibility and a low flying altitude, and can even carry various sensors such as magnetometers, all of which has led to increasing interest in the field.


Drone (Unsplash)

Teaching a drone to see is quite challenging because of the bird’s-eye view: most pre-trained models are trained on the ground-level, everyday point of view found in datasets like ImageNet, PASCAL VOC and COCO. In this project I experiment with training on a drone dataset, with these aims:

  • A lightweight model (fewer parameters)
  • A high score (I hope so)
  • Fast inference latency

Datasets

The Semantic Drone Dataset [2] focuses on semantic understanding of urban scenes to increase the safety of autonomous drone flight and landing procedures. The imagery depicts more than 20 houses from a nadir (bird’s-eye) view, acquired at altitudes of 5 to 30 meters above ground. A high-resolution camera captured images at 6000x4000 px (24 Mpx). The training set contains 400 publicly available images and the test set is made up of 200 private images.


Sample images from the dataset

The complexity of the dataset is limited to 20 classes (although the masks actually contain 23), listed as follows: tree, grass, other vegetation, dirt, gravel, rocks, water, paved area, pool, person, dog, car, bicycle, roof, wall, fence, fence-pole, window, door, obstacle.


Methods

Preprocessing

I resize the images to 704 x 1056, keeping the aspect ratio of the original input. I don’t crop the images into patches, for a few reasons: the objects are not too small, the resized images don’t take much memory, and it saves training time. I split the dataset into three parts, training (306), validation (54) and test (40) sets, and applied HorizontalFlip, VerticalFlip, GridDistortion, RandomBrightnessContrast and GaussNoise to the training data, with a mini-batch size of 3.
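The transform names above match the Albumentations API, so the augmentation pipeline can be sketched with that library. This is a minimal sketch, assuming Albumentations; the probabilities are my own placeholder choices, not values from the article:

import albumentations as A

# Resize to the training resolution, then apply the augmentations named above;
# p controls how often each transform fires (placeholder values).
train_transform = A.Compose([
    A.Resize(height=704, width=1056),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.GridDistortion(p=0.2),
    A.RandomBrightnessContrast(p=0.3),
    A.GaussNoise(p=0.2),
])

# Albumentations keeps the image and its segmentation mask aligned
# when both are passed together:
# augmented = train_transform(image=image, mask=mask)
# image_aug, mask_aug = augmented["image"], augmented["mask"]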

Model Architecture

I use two model architectures, purposely choosing lightweight backbones such as MobileNet and EfficientNet for computational efficiency:

  • U-Net with MobileNetV2 and EfficientNet-B3 as backbones
  • FPN (Feature Pyramid Network) with an EfficientNet-B3 backbone

I followed Parmar’s paper [3] for the model choices (I had already trained different models before, and these choices seemed to work).
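The article does not name its training framework; segmentation_models_pytorch is one library that provides exactly these encoder/decoder combinations, so a sketch under that assumption (with 23 output channels to match the masks) could look like this:

import segmentation_models_pytorch as smp

# U-Net with a lightweight MobileNetV2 encoder; swap encoder_name to
# "efficientnet-b3" for the second U-Net variant.
unet = smp.Unet(
    encoder_name="mobilenet_v2",
    encoder_weights="imagenet",  # ImageNet pre-trained weights
    in_channels=3,
    classes=23,
)

# FPN with an EfficientNet-B3 encoder.
fpn = smp.FPN(
    encoder_name="efficientnet-b3",
    encoder_weights="imagenet",
    in_channels=3,
    classes=23,
)

Both decoders reuse pre-trained lightweight encoders, which keeps the parameter count low while still benefiting from ImageNet features.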


#computer-vision #remote-sensing #drones #deep-learning


AI & ML Can Open Up Space For Drones To Demonstrate Full Potential

The demand for drones in the market has seen an incredible upsurge lately. As a matter of fact, the convenience of remote monitoring and its flexibility in harsh terrains have considerably increased the popularity of drone technology across segments like eCommerce, agriculture and warfare, to name a few, which was unthinkable a decade ago.

According to data, the current global market size of drone technology is about $14 billion and is expected to grow to $43 billion by the year 2024. The major chunk of this growth can be attributed to its significant usage in commercial deliveries. Currently, there are several drone start-up companies in India that focus on developing and manufacturing drones and providing analytics platforms for drone solutions. Some prominent names include Bangalore-based EDALL SYSTEMS and Skylark Drones and Delhi-based Atom Drones.

Having said that, while drones have seen a significant market boost, the technology comes with safety, security and privacy concerns, and it has been constantly scrutinised for its potential to exploit the security and privacy of individuals. Further, there have also been cases where criminals, drug cartels and terrorists have used drones. To dig more in-depth into drone technology and the involvement of artificial intelligence in enhancing drone solutions, Analytics India Magazine spoke to Karthik Shankaran, the Chief Innovation Officer of Detroit Engineered Products (DEP).

To set the context: DEP is a product development house whose customers focus strictly on drone development. While other companies work on the development and manufacturing of drones, DEP offers drone development as a service and solution offering, making it preferable for collaboration in solution development. With DEP, customers can get their product or solution developed during any stage of drone development, from the conceptual phase to prototype and production. The company has expertise and experience in the development of both rotor drones (quadcopters) and fixed-wing drones.



#people #agricultural-drones #ai-drones #drones #drones-in-india #ai


Queenie Davis

How AI Enables Intuitive Camera Control For Drone Cinematography

Drones are revolutionising how professionals and amateurs generate video content for films, live events, AR/VR, etc. Aerial cameras offer dynamic viewpoints compared to traditional devices. However, despite significant advancements in autonomous flight technology, creating expressive camera behaviors poses a challenge and requires non-technical users to edit a large number of unintuitive control parameters.


Recently, researchers from Facebook AI, Carnegie Mellon University and the University of Sao Paulo have developed a data-driven framework to edit complex camera positioning parameters in semantic space.

In a research paper, ‘Batteries, camera, action! Learning a semantic control space for expressive robot cinematography,’ co-authors Jessica Hodgins, Mustafa Mukadam, Sebastian Scherer, Rogerio Bonatti and Arthur Bucker explained various frameworks implemented in the process.

Semantic space control framework

For this, the researchers generated a database of clips with a diverse range of shots in a photo-realistic simulator and used hundreds of participants in a crowdsourcing framework to obtain scores and ranks for a set of ‘semantic descriptors’ for each clip, which were then used to train machine learning models. The term ‘semantic descriptor’ is commonly used in computer vision and refers to a word or phrase that describes a given object.

Once the video scores are ready, the clips are analysed for correlations between descriptors, and a semantic control space is built based on cinematography guidelines and human perception studies. This space is then translated through a ‘generative model’ that can map a set of desired semantic video descriptors into low-level camera trajectory parameters.

This is followed by system evaluation to generate final shots rated by participants as per the expected degree of expression for each descriptor.

#opinions #aerial-drones-cinematography #drone-machine-learning #drone-technology

Harsha Shirali

Replace Elements in Python NumPy Array with Example

In this article, we will learn how to replace elements in a Python NumPy array by following the examples below. Each example starts from the same original array.

Example 1 : Replace Elements Equal to Some Value

The following code shows how to replace all elements in the NumPy array equal to 8 with a new value of 20:

import numpy as np

#create the original NumPy array
my_array = np.array([4, 5, 5, 7, 8, 8, 9, 12])

#replace all elements equal to 8 with 20
my_array[my_array == 8] = 20

#view updated array
print(my_array)

[ 4  5  5  7 20 20  9 12]

Example 2: Replace Elements Based on One Condition

The following code shows how to replace all elements in the NumPy array greater than 8 with a new value of 20:

#start again from the original array
my_array = np.array([4, 5, 5, 7, 8, 8, 9, 12])

#replace all elements greater than 8 with 20
my_array[my_array > 8] = 20

#view updated array
print(my_array)

[ 4  5  5  7  8  8 20 20]

Example 3: Replace Elements Based on Multiple Conditions

The following code shows how to replace all elements in the NumPy array greater than 8 or less than 6 with a new value of 20:

#start again from the original array
my_array = np.array([4, 5, 5, 7, 8, 8, 9, 12])

#replace all elements greater than 8 or less than 6 with a new value of 20
my_array[(my_array > 8) | (my_array < 6)] = 20

#view updated array
print(my_array)

[20 20 20  7  8  8 20 20]
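As a side note beyond the original examples, np.where builds a new array instead of modifying my_array in place, which is useful when the original values must be preserved:

import numpy as np

my_array = np.array([4, 5, 5, 7, 8, 8, 9, 12])

#20 where the condition holds, the original value elsewhere
new_array = np.where((my_array > 8) | (my_array < 6), 20, my_array)

print(new_array)  #[20 20 20  7  8  8 20 20]
print(my_array)   #unchanged: [4 5 5 7 8 8 9 12]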

#python 
 

Monotone: An Unsplash Application for iOS

Monotone

Monotone is a modern mobile application integrated with the powerful Unsplash API provided by Unsplash. It implements almost all features, including viewing, searching and collecting photos. Other features, such as profile, license and FAQ, are supported as well.

This is an unofficial application; exploring the feasibility of some concepts is the goal of this project. It is written in Swift, driven by RxSwift, and draws responsive constraints using SnapKit.

If you like this project or are inspired by any of its ideas, please star it without any hesitation. (ヽ(✿゚▽゚)ノ)

Overview

screen-record-1.gif screen-record-2.gif



 

Development Progress

Features

  •  Write Interfaces Programmatically
  •  Dark Mode Support
  •  Animation Effects
  •  Localization
  •  Powered by Unsplash API
  •  More...

Tasks

Currently supported tasks:

Each page is tracked across four aspects: Style & Layout, Powered by Data, Animation Effects and Localization (⬜️ marks items still in progress).

  • Main
      • Login: Sign Up & Sign In
      • Photo: List (Search & Topic), View, Camera Settings, Collect (Add & Remove), Share to SNS ⬜️, Save to Album
  • Side Menu
      • Profile: Details
      • Menu: My Photos, Hiring ⬜️, Licenses, Help ⬜️, Made with Unsplash ⬜️
  • Tab Bar
      • Store: Home ⬜️, Details ⬜️
      • Wallpaper: List (Adapt Screen Size) ⬜️
      • Collection: List
      • Explore: List (Photo & Collection) ⬜️



 

Getting Started

This application uses CocoaPods to manage dependencies. Please refer to the CocoaPods official website to install and configure it (if you have already installed CocoaPods, skip this).

Prerequisites

Monotone is powered by the Unsplash API. The very first thing that must be done is applying for a pair of OAuth keys to run it.

  1. Visit Unsplash, sign up and then sign in. (If you already have an account, skip this.)
  2. Visit the Unsplash application registration platform, agree to the terms and create a new application; the application name and description can be anything.
  3. After the application is created, you will be redirected to the application details page automatically (it can also be found at https://unsplash.com/oauth/applications). In the Redirect URI & Permissions - Redirect URI section, input monotone://unsplash and make sure all authentication options are checked, just like the image shown below.
  4. Once that is done, note the "Access Key" and "Secret Key" on this page; they will be used soon.

Installation

  1. Execute the following commands in the terminal:
# Clone to a local folder
git clone https://github.com/Neko3000/Monotone.git

# Direct to Project folder
cd Monotone

# Install Pods
pod install
  2. Under the Monotone folder, duplicate the config_debug.json file and rename it to config.json (this file is ignored by .gitignore).
  3. Open config.json and input your "Access Key" and "Secret Key" (see the hypothetical example below); they will be copied to the app folder when running. (For more information, refer to Project -> Build Phases -> Run Script and APPCredential.swift.)
  4. Done. Command + R.
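For illustration, config.json simply holds the two keys from the prerequisites. The exact field names are defined by the config_debug.json template in the repository, so the keys below are hypothetical placeholders:

{
    "access_key": "YOUR_ACCESS_KEY",
    "secret_key": "YOUR_SECRET_KEY"
}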

     

Dependencies

  • RxSwift: framework for reactive async programming.
  • Action: based on RxSwift, encapsulates actions for calling.
  • DataSources: based on RxSwift, extends the logic interaction of table views and collection views.
  • Alamofire: HTTP networking library.
  • SwiftyJSON: handles JSON-format data effectively.
  • ObjectMapper: maps data between models and JSON.
  • Kingfisher: network image caching library with many functions.
  • SnapKit: makes constraints effectively.
  • ...

For more information, please check the Podfile.

Project Structure

The basic structure of this project.

Monotone 
├── Monotone
│   ├── /Vars  #Global Variables
│   ├── /Enums  #Enums (Includes some dummy data)
│   ├── /Application
│   │   ├── AppCredential  #Authentication Credential
│   │   ...
│   │   └── UserManager  #User Management
│   ├── /Utils  #Utils
│   │   ├── /BlurHash  #Photo Hash
│   │   ├── ColorPalette  #Global Colors
│   │   ├── AnimatorTrigger  #Animation Effects
│   │   └── MessageCenter  #Message Notification
│   │── /Extension  #Extensions
│   │── /Services  #Services
│   │   ├── /Authentication  #Requests of Authentication
│   │   └── /Network  #Requests of Data
│   │── /Components  #View Classes
│   │── /ViewModels  #View Models
│   │── /ViewControllers  #View Controllers
│   │── /Models  #Data Models
│   │── /Coordinators  #Segues
│   └── /Resource  #Resource
└── Pods


Designing

The interfaces you see were all designed by Addie Design Co., who shared this design document so that everyone can download and use it for free. The design elements and their level of completion are astonishing. This application would not be here without this design document.

Thanks again to Addie Design Co and this beautiful design document.



 

About Unsplash

Unsplash is a website dedicated to sharing high-quality stock photography under the Unsplash license. All photos uploaded by photographers will be organized and archived by editors.

This website is one of my favorites, admired for its artistry and spirit of sharing.
You will find my home page here. (Not updated frequently since 2020.)



 

Contributing

Limited by the data the Unsplash API provides, some parts of this application have only their styles and layouts finished (mostly in Store, Explore, etc.). If the API provides more detailed data for these parts in the future, we will add the new features as soon as possible.

Meanwhile, focusing on the current application, we will improve it continuously.

How to Participate

If you are an experienced mobile application developer and want to improve this application, you are welcome to participate in this open-source project: practice your ideas, and improve or even refactor this application.

Follow standard steps:

  1. Fork this repo;
  2. Create your new Branch (git checkout -b feature/AmazingFeature);
  3. Commit your changes (git commit -m 'Add some AmazingFeature');
  4. Push to remote Branch (git push origin feature/AmazingFeature);
  5. Open a Pull Request.

If you find any problems, open an issue. PRs are welcome.


Author: Neko3000
Source code: https://github.com/Neko3000/Monotone
License: MIT license

#swift