Moments in Time is one of the largest human-annotated video datasets, capturing visual and audible short events produced by humans, animals, objects and natural phenomena. It was introduced in 2018 by researchers including Mathew Monfort, Alex Andonian, Bolei Zhou and Kandan Ramakrishnan. The dataset comprises more than 1,000,000 3-second video clips, each labelled with one of 339 distinct action verbs. Every verb is associated with more than 1,000 videos, resulting in a large, balanced dataset for learning dynamic events from video. The everyday activities covered by the dataset include falling to the floor, opening (a mouth, an eye, a door), swimming, jumping, and so on.

Here, we will examine the information contained in this dataset and how it was assembled, and present some benchmark models that achieve high accuracy on it. We will then work with the Moments in Time dataset using the PyTorch and TensorFlow libraries.
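Since every clip in the dataset is only 3 seconds long, a typical preprocessing step before feeding clips to a PyTorch or TensorFlow model is to sample a fixed number of frames from each clip. The sketch below shows one common way to pick evenly spaced frame indices; the 30 fps frame rate is an assumption for illustration, not a property guaranteed by the dataset.

```python
def sample_frame_indices(num_frames_in_clip, num_samples):
    """Pick `num_samples` evenly spaced frame indices from a clip.

    Each index is taken from the centre of one of `num_samples`
    equal-sized segments, a common strategy for short video clips.
    """
    if num_samples >= num_frames_in_clip:
        # Clip is too short to subsample; return every frame index.
        return list(range(num_frames_in_clip))
    step = num_frames_in_clip / num_samples
    return [int(step * i + step / 2) for i in range(num_samples)]


# Example (assumed frame rate): a 3-second clip at 30 fps has 90 frames,
# from which we sample 8 for the model input.
indices = sample_frame_indices(90, 8)
print(indices)  # 8 indices spread across the 90 frames
```

The returned indices can then be used to select frames from a decoded video tensor (e.g. one produced by `torchvision.io.read_video` in PyTorch) before stacking them into a model input.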

#human motion dataset #moment in time #pytorch #tensorflow #video dataset

Moments in Time: The Biggest Short Video Dataset For Data Scientists