How AI Enables Intuitive Camera Control For Drone Cinematography

Drones are revolutionising how professionals and amateurs produce video content for films, live events, AR/VR and more. Aerial cameras offer dynamic viewpoints that traditional cameras cannot match. However, despite significant advances in autonomous flight technology, creating expressive camera behaviours remains a challenge: it requires non-technical users to edit a large number of unintuitive control parameters.


Recently, researchers from Facebook AI, Carnegie Mellon University and the University of São Paulo developed a data-driven framework that lets users edit complex camera positioning parameters in a semantic space.

In the research paper ‘Batteries, camera, action! Learning a semantic control space for expressive robot cinematography’, co-authors Jessica Hodgins, Mustafa Mukadam, Sebastian Scherer, Rogerio Bonatti and Arthur Bucker explain the framework and the process used to build it.

Semantic control space framework

To build the framework, the researchers generated a database of clips with a diverse range of shot types in a photo-realistic simulator, then recruited hundreds of participants through a crowdsourcing framework to score each clip against a set of ‘semantic descriptors’. ‘Semantic descriptor’ is a term commonly used in computer vision for a word or phrase that describes a given object; here, each descriptor characterises a video clip rather than an object.
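As a rough illustration (not the authors’ actual data schema; the clip IDs, descriptor names and rating scale below are hypothetical), the crowdsourced ratings can be pictured as a table of per-worker scores that is averaged into one score per clip and descriptor:

```python
import pandas as pd

# Hypothetical crowdsourced ratings: each row is one worker's score
# (say, on a 1-7 Likert scale) for one semantic descriptor on one clip.
ratings = pd.DataFrame([
    {"clip_id": "clip_001", "worker": "w17", "descriptor": "exciting", "score": 6},
    {"clip_id": "clip_001", "worker": "w23", "descriptor": "exciting", "score": 5},
    {"clip_id": "clip_001", "worker": "w17", "descriptor": "calm",     "score": 2},
    {"clip_id": "clip_002", "worker": "w08", "descriptor": "exciting", "score": 3},
    {"clip_id": "clip_002", "worker": "w08", "descriptor": "calm",     "score": 6},
])

# Average over workers, then pivot into a clips-by-descriptors matrix,
# giving one aggregate score per (clip, descriptor) pair.
scores = (
    ratings.groupby(["clip_id", "descriptor"])["score"]
    .mean()
    .unstack("descriptor")
)
print(scores)
```

With the data in this shape, the correlation analysis described next amounts to something like `scores.corr()` on the resulting matrix.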

Once the video scores are collected, the clips are analysed for correlations between descriptors, and a semantic control space is built based on cinematography guidelines and human-perception studies. This space is then linked to a ‘generative model’ that maps a set of desired semantic video descriptors to low-level camera trajectory parameters.
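A minimal sketch of what such a generative mapping might look like, assuming (purely for illustration) a small feed-forward network; the paper’s actual model architecture, descriptor set and trajectory parameterisation may differ:

```python
import torch
import torch.nn as nn

# Illustrative sizes: a handful of semantic descriptors in, a handful of
# low-level camera trajectory parameters out. Both the names and the
# counts are assumptions for this sketch, not the paper's actual setup.
NUM_DESCRIPTORS = 4   # e.g. "exciting", "calm", "enjoyable", "interesting"
NUM_TRAJ_PARAMS = 5   # e.g. distance, height, tilt, speed, angular offset

# A small (untrained) MLP standing in for the generative model:
# desired semantic scores go in, camera trajectory parameters come out.
model = nn.Sequential(
    nn.Linear(NUM_DESCRIPTORS, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, NUM_TRAJ_PARAMS),
)

# Desired shot: very exciting, not calm, moderately enjoyable/interesting.
desired = torch.tensor([[0.9, 0.1, 0.6, 0.7]])
with torch.no_grad():
    traj_params = model(desired)
print(traj_params)  # parameters a trajectory planner would turn into a flight path
```

In a real pipeline the output would be handed to the drone’s trajectory planner, which converts these low-level parameters into an actual flight path.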

Finally, the system is evaluated by generating new shots and having participants rate whether each one expresses its target descriptors to the expected degree.

