1596016800
How to teach a drone to see what is below and segment objects at high resolution
Drones have gained popularity in the past few years: they provide higher-resolution images than satellite imagery at lower cost, offer flexibility and a low flying altitude, and can even carry various sensors such as magnetic sensors, all of which has led to increasing interest in the field.
Drone (Unsplash)
Teaching a drone to see is quite challenging because of the bird’s-eye view: most pre-trained models are trained on the everyday, eye-level images we see on a daily basis (ImageNet, PASCAL VOC, COCO). In this project I want to experiment with training on drone datasets; the aims are:
[2] The Semantic Drone Dataset focuses on the semantic understanding of urban scenes to increase the safety of autonomous drone flight and landing procedures. The imagery depicts more than 20 houses from a nadir (bird’s-eye) view, acquired at an altitude of 5 to 30 meters above ground. A high-resolution camera was used to acquire images at a size of 6000x4000 px (24 Mpx). The training set contains 400 publicly available images and the test set is made up of 200 private images.
Sample image from the dataset
The complexity of the dataset is limited to 20 classes (although its masks actually contain 23 classes), listed as follows: tree, grass, other vegetation, dirt, gravel, rocks, water, paved area, pool, person, dog, car, bicycle, roof, wall, fence, fence-pole, window, door, obstacle.
Preprocessing
I resize the images to 704 x 1056, keeping the same aspect ratio as the original input. I don’t crop the images into patches for a few reasons: the objects are not too small, the resized images don’t take much memory, and it saves training time. I split the dataset into three parts: training (306), validation (54), and test (40) sets. I applied HorizontalFlip, VerticalFlip, GridDistortion, RandomBrightnessContrast, and GaussNoise to the training data, with a mini-batch size of 3; a sketch of this pipeline is shown below.
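The augmentation names above match transforms in the Albumentations library, so the pipeline can be sketched as follows (a minimal sketch, assuming Albumentations; the probabilities are my own illustrative choices, not stated in the article):

```python
import albumentations as A

# Training pipeline: resize keeps the original 4000x6000 aspect ratio
# (704 x 1056), then apply the augmentations listed in the article.
# Probabilities are illustrative assumptions.
train_transform = A.Compose([
    A.Resize(height=704, width=1056),
    A.HorizontalFlip(p=0.5),
    A.VerticalFlip(p=0.5),
    A.GridDistortion(p=0.2),
    A.RandomBrightnessContrast(p=0.3),
    A.GaussNoise(p=0.2),
])

# Validation/test data is only resized, never augmented.
val_transform = A.Compose([A.Resize(height=704, width=1056)])

# Usage: applies the same geometric transforms to image and mask together.
# augmented = train_transform(image=image, mask=mask)
```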
Model Architecture
I use two model architectures, purposely choosing lightweight backbones such as MobileNet and EfficientNet for computational efficiency.
I followed Parmar’s paper [3] for the model choices (I had already trained different models before, and these choices seem to work); a minimal sketch of how such models can be built is shown below.
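As an illustration only (the article does not name its training framework), the segmentation_models_pytorch library offers segmentation models with exactly these lightweight encoders; the U-Net decoder choice and the 23 output classes below are my assumptions based on the dataset description:

```python
import segmentation_models_pytorch as smp

# Hypothetical sketch: two lightweight-backbone models for 23-class
# segmentation (the number of classes present in the dataset masks).
mobilenet_model = smp.Unet(
    encoder_name="mobilenet_v2",     # lightweight backbone
    encoder_weights="imagenet",      # ImageNet pre-training
    in_channels=3,
    classes=23,
)

efficientnet_model = smp.Unet(
    encoder_name="efficientnet-b0",  # lightweight backbone
    encoder_weights="imagenet",
    in_channels=3,
    classes=23,
)
```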
#computer-vision #remote-sensing #drones #deep-learning
1599537480
The demand for drones in the market has seen an incredible upsurge lately. As a matter of fact, the convenience of remote monitoring and its flexibility in harsh terrain have considerably increased the popularity of drone technology across segments like eCommerce, agriculture, and warfare, to name a few, in ways that were unthinkable a decade ago.
According to data, the current global market size of drone technology is about $14 billion and is expected to grow to $43 billion by the year 2024. The major chunk of this growth can be attributed to its significant usage in commercial deliveries. Currently, there are several drone start-up companies in India that focus on development and manufacturing and provide analytics platforms for drone solutions. Some prominent names include Bangalore-based EDALL SYSTEMS and Skylark Drones, and Delhi-based Atom Drones.
Having said that, while drones have seen a significant market boost, they come with safety, security, and privacy concerns, and they have been constantly scrutinised over deployments that exploit the security and privacy of individuals. Further, there have also been cases where criminals, drug cartels, and terrorists have used drones. To dig deeper into drone technology and the role of artificial intelligence in enhancing drone solutions, Analytics India Magazine spoke to Karthik Shankaran, the Chief Innovation Officer of Detroit Engineered Products (DEP).
To set the context: DEP is a product development house whose customers can focus strictly on drone development. While other companies work on the development and manufacturing of drones, DEP offers drone development as a service and solution offering, making it preferable for collaboration in solution development. With DEP, customers can get their product or solution developed during any stage of drone development, from the conceptual phase to prototype and production. The company has expertise and experience in the development of both rotor drones (quadcopters) and fixed-wing drones.
#people #agricultural drones #ai drones #drones #drones in india #ai
1623960960
Drones are revolutionising how professionals and amateurs generate video content for films, live events, AR/VR, etc. Aerial cameras offer dynamic viewpoints compared to traditional devices. However, despite significant advancements in autonomous flight technology, creating expressive camera behaviors poses a challenge and requires non-technical users to edit a large number of unintuitive control parameters.
Recently, researchers from Facebook AI, Carnegie Mellon University and the University of Sao Paulo have developed a data-driven framework to edit complex camera positioning parameters in semantic space.
In a research paper, ‘Batteries, camera, action! Learning a semantic control space for expressive robot cinematography,’ co-authors Jessica Hodgins, Mustafa Mukadam, Sebastian Scherer, Rogerio Bonatti and Arthur Bucker explained various frameworks implemented in the process.
Semantic space control framework
For this, the researchers generated a database of clips with a diverse range of shots in a photo-realistic simulator and used hundreds of participants in a crowdsourcing framework to obtain scores and rankings for a set of ‘semantic descriptors’ for each clip, which were then used to train machine learning models. The term ‘semantic descriptor’ is commonly used in computer vision to refer to a word or phrase that describes a given object.
Once the video scores are ready, the clips are analysed for correlations between descriptors, and a semantic control space is built based on cinematography guidelines and human perception studies. This space is then translated through a ‘generative model’ that can map a set of desired semantic video descriptors into low-level camera trajectory parameters.
This is followed by a system evaluation in which participants rate the final generated shots according to the expected degree of expression for each descriptor.
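To make the mapping step concrete, here is a toy sketch of a learned mapping from semantic descriptor scores to low-level camera trajectory parameters. This is not the authors’ model, and every name, shape, and number below is hypothetical:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical data: crowd-sourced descriptor scores (e.g. "exciting",
# "calm", "revealing") for 500 clips, paired with the low-level camera
# trajectory parameters (e.g. speed, distance, tilt) used to render them.
rng = np.random.default_rng(0)
descriptor_scores = rng.random((500, 3))   # 500 clips x 3 descriptors
trajectory_params = rng.random((500, 5))   # 500 clips x 5 camera parameters

# Fit a small multi-output regressor from descriptors to parameters.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(descriptor_scores, trajectory_params)

# At run time, a user sets desired semantic descriptors and the model
# emits camera trajectory parameters for the drone controller.
desired = np.array([[0.9, 0.1, 0.6]])      # mostly "exciting", somewhat "revealing"
print(model.predict(desired))
```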
#opinions #aerial drones cinematography #drone machine learning #drone technology
1669174554
In this article, we will learn how to replace elements in a Python NumPy array, following the examples below.
The following code shows how to replace all elements in the NumPy array equal to 8 with a new value of 20:

```python
import numpy as np

# the example array used throughout this article
my_array = np.array([4, 5, 5, 7, 8, 8, 9, 12])

# replace all elements equal to 8 with 20
my_array[my_array == 8] = 20

# view the updated array
print(my_array)
```

Output:

[ 4 5 5 7 20 20 9 12]
The following code shows how to replace all elements in the NumPy array greater than 8 with a new value of 20:

```python
# start again from the original array
my_array = np.array([4, 5, 5, 7, 8, 8, 9, 12])

# replace all elements greater than 8 with 20
my_array[my_array > 8] = 20

# view the updated array
print(my_array)
```

Output:

[ 4 5 5 7 8 8 20 20]
The following code shows how to replace all elements in the NumPy array greater than 8 or less than 6 with a new value of 20:

```python
# start again from the original array
my_array = np.array([4, 5, 5, 7, 8, 8, 9, 12])

# replace all elements greater than 8 or less than 6 with 20
my_array[(my_array > 8) | (my_array < 6)] = 20

# view the updated array
print(my_array)
```

Output:

[20 20 20 7 8 8 20 20]
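If you want to keep the original array unchanged, np.where offers a non-mutating alternative to the in-place boolean-mask assignment used above; it returns a new array instead:

```python
import numpy as np

my_array = np.array([4, 5, 5, 7, 8, 8, 9, 12])

# build a new array: 20 where the condition holds, the original value elsewhere
new_array = np.where((my_array > 8) | (my_array < 6), 20, my_array)

print(new_array)  # [20 20 20  7  8  8 20 20]
print(my_array)   # the original array is unchanged
```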
1655906400
Monotone is a modern mobile application integrated with the powerful Unsplash API. It implements almost all of its features, including viewing, searching, and collecting photos. Other features, such as profile, license, and FAQ, are supported as well.
This is an unofficial application; exploring the feasibility of some concepts is the goal of this project. It is written in Swift, driven by RxSwift, and lays out responsive constraints using SnapKit.
If you like this project or are inspired by any of its ideas, please star it without hesitation. (ヽ(✿゚▽゚)ノ)
Currently supported tasks:
| Position | Module | Page | Style & Layout | Powered by Data | Animation Effects | Localization |
|---|---|---|---|---|---|---|
| Main | Login | Sign Up & Sign In | ✅ | ✅ | ✅ | ✅ |
| | Photo | List (Search & Topic) | ✅ | ✅ | ✅ | ✅ |
| | | View | ✅ | ✅ | ✅ | ✅ |
| | | Camera Settings | ✅ | ✅ | ✅ | ✅ |
| | | Collect (Add & Remove) | ✅ | ✅ | ✅ | ✅ |
| | | Share to SNS | ✅ | ⬜️ | ✅ | ✅ |
| | | Save to Album | ✅ | ✅ | ✅ | ✅ |
| Side Menu | Profile | Details | ✅ | ✅ | ✅ | ✅ |
| | Menu | My Photos | ✅ | ✅ | ✅ | ✅ |
| | | Hiring | ✅ | ⬜️ | ✅ | ✅ |
| | | Licenses | ✅ | ✅ | ✅ | ✅ |
| | | Help | ✅ | ⬜️ | ✅ | ✅ |
| | | Made with Unsplash | ✅ | ⬜️ | ✅ | ✅ |
| Tab Bar | Store | Home | ✅ | ⬜️ | ✅ | ✅ |
| | | Details | ✅ | ⬜️ | ✅ | ✅ |
| | Wallpaper | List (Adapt Screen Size) | ✅ | ⬜️ | ✅ | ✅ |
| | Collection | List | ✅ | ✅ | ✅ | ✅ |
| | Explore | List (Photo & Collection) | ✅ | ⬜️ | ✅ | ✅ |
This application uses Cocoapods to manage dependencies. Please refer to the Cocoapods Official Website to install and configure it (if you have already installed Cocoapods, skip this).
Monotone is powered by the Unsplash API. The very first thing that must be done is to apply for a pair of OAuth keys to run it.
In the Redirect URI & Permissions section, under Redirect URI, input `monotone://unsplash` and make sure all authentication options are checked, just like the image shown below.

```sh
# Clone to a local folder
git clone https://github.com/Neko3000/Monotone.git
# Direct to the project folder
cd Monotone
# Install Pods
pod install
```
Duplicate the `config_debug.json` file and rename it to `config.json` (this file is ignored by .gitignore). In `config.json`, input your ”Access Key“ and ”Secret Key“; they will be copied to the app folder when running. (For more information, please refer to the content in Project->Build Phases->Run Script and APPCredential.swift.)

Project | Description |
---|---|
RxSwift | Framework for Reactive Async Programming. |
Action | Based on RxSwift, encapsulates actions for calling. |
DataSources | Based on RxSwift, extends the interaction logic of table views and collection views. |
Alamofire | HTTP network library. |
SwiftyJSON | Handle JSON format data effectively. |
ObjectMapper | Map data between models and JSON. |
Kingfisher | Network image caching library with many functions. |
SnapKit | Make constraints effectively. |
... | ... |
For more information, please check the Podfile.
The basic structure of this project:
Monotone
├── Monotone
│ ├── /Vars #Global Variables
│ ├── /Enums #Enums (Includes some dummy data)
│ ├── /Application
│ │ ├── AppCredential #Authentication Credential
│ │ ...
│ │ └── UserManager #User Management
│ ├── /Utils #Utils
│ │ ├── /BlurHash #Photo Hash
│ │ ├── ColorPalette #Global Colors
│ │ ├── AnimatorTrigger #Animation Effects
│ │ └── MessageCenter #Message Notification
│ │── /Extension #Extensions
│ │── /Services #Services
│ │ ├── /Authentication #Requests of Authentication
│ │ └── /Network #Requests of Data
│ │── /Components #View Classes
│ │── /ViewModels #View Models
│ │── /ViewControllers #View Controllers
│ │── /Models #Data Models
│ │── /Coordinators #Segues
│ └── /Resource #Resource
└── Pods
Designing
The interfaces you see were all designed by Addie Design Co. They shared this design document so that everyone can download and use it for free. The design elements and their level of completion are astonishing. This application would not be here without this design document.
Thanks again to Addie Design Co for this beautiful design document.
Unsplash is a website dedicated to sharing high-quality stock photography under the Unsplash license. All photos uploaded by photographers will be organized and archived by editors.
And this website is one of my favorites, admired for its artistry and its spirit of sharing.
You will find my home page here. (Not updated frequently since 2020)
Limited by the data the Unsplash API provides, some parts of this application have only their styles and layouts finished (mostly the Store, Explore, etc.). If the API provides more detailed data for these parts in the future, we will add new features as soon as possible.
Meanwhile, focusing on the current application, we will improve it continuously.
If you are an experienced mobile application developer and want to improve this application, you are welcome to participate in this open-source project. Practice your ideas, improve or even refactor this application.
Follow the standard steps:

1. Fork this repo;
2. Create your feature branch (`git checkout -b feature/AmazingFeature`);
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`);
4. Push to the remote branch (`git push origin feature/AmazingFeature`);
5. Open a Pull Request.

For anyone, open an issue if you find any problems. PRs are welcome.
Author: Neko3000
Source code: https://github.com/Neko3000/Monotone
License: MIT license