The dataset will enable more automated sign language understanding and translation. These technologies could power applications such as virtual assistants and robotics.

Artificial intelligence (AI) is helping humans store, parse, and learn language. With a new dataset, researchers and developers could get a massive boost in developing technologies for the deaf community.

The How2Sign Dataset

The dataset includes over 80 hours of videos showing sign language interpreters translating a variety of tutorials. Amanda Duarte, a researcher in the Emerging Technologies for Artificial Intelligence group at the Barcelona Supercomputing Center (BSC), spent two years recording these videos and preparing the data.
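For a sense of how such a dataset might be consumed in practice, here is a minimal sketch that iterates over clip/transcript pairs. The folder layout, manifest file name, and column names (`clip_id`, `transcript`) are assumptions for illustration, not the actual How2Sign release format.

```python
import csv
from pathlib import Path

# Hypothetical layout: a folder of clip videos plus a tab-separated
# manifest mapping each clip to its English transcript. The real
# How2Sign release may organize files differently; adjust the paths
# and column names to match.
DATA_ROOT = Path("how2sign")
MANIFEST = DATA_ROOT / "train.tsv"  # assumed manifest file name

def iter_clips(manifest_path: Path):
    """Yield (video_path, transcript) pairs from the manifest."""
    with manifest_path.open(newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f, delimiter="\t")
        for row in reader:
            video = DATA_ROOT / "videos" / f"{row['clip_id']}.mp4"
            yield video, row["transcript"]

if __name__ == "__main__":
    for video, text in iter_clips(MANIFEST):
        print(video.name, "->", text[:60])
```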

Duarte also made use of Carnegie Mellon University's Panoptic Studio, a state-of-the-art dome-shaped studio that allowed researchers to film the interpreters and reconstruct their movements in 3D.
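A dome of synchronized, calibrated cameras makes this kind of 3D reconstruction possible through multi-view triangulation. The sketch below shows the standard direct linear transform (DLT) approach as an illustration, not the Panoptic Studio's actual pipeline; the toy cameras at the end are made up for the demo.

```python
import numpy as np

def triangulate(points_2d, proj_mats):
    """Triangulate one 3D point from 2D detections in multiple views.

    points_2d : (N, 2) pixel coordinates of the same joint in N cameras
    proj_mats : (N, 3, 4) camera projection matrices
    Returns the 3D point as a length-3 array (least squares via SVD).
    """
    rows = []
    for (x, y), P in zip(points_2d, proj_mats):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: x*(P[2]@X) = P[0]@X, and likewise for y.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy check with two synthetic cameras; the second is shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])
pts = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate(np.array(pts), np.stack([P1, P2])))  # ~ [0.5, 0.2, 4.0]
```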

Thanks to Duarte, How2Sign provides a public resource for researchers in natural language processing and computer vision, helping usher in a new era of products and services built for deaf and hard-of-hearing users. Making the internet more accessible is a major goal, and one of the first applications is software that transfers signs from one user to another.

The dataset gives researchers and developers a valuable resource for designing quality technology that considers the needs of the deaf community. Artificial intelligence requires computational power and capable algorithms, but it also requires data.

