Increases in processing capacity, advances in machine learning techniques, and the amount of data available have improved the performance of image recognition systems, enabling the development of projects in areas such as home automation, biology, genetics, augmented reality, security, and sign language recognition.
Many works have addressed the problem of translating sign languages; however, despite these efforts, no system has yet been developed that addresses this problem for Panamanian sign language.
In this work, a first-person vision translation system for sign language is designed, establishing a conceptual solution from which a classifier is developed, using deep learning techniques, for the translation of static gestures belonging to Panamanian sign language.
Two classifiers are trained using convolutional neural networks as the classification algorithm, and a training dataset of around 55,000 images is created and refined with the participation of 3 users.
Finally, the classifiers are evaluated on a test dataset collected from two users, one of whom is a new subject unseen by the classifiers during training.
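The abstract does not specify the network architecture, so as illustration only, the following is a minimal NumPy sketch of the core operations a convolutional classifier applies to a grayscale gesture image (convolution, ReLU, max pooling, a dense layer, and softmax). All shapes, the filter count, and the 5-class output are hypothetical assumptions, and the weights are random rather than trained.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation (what DL frameworks call 'convolution')."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(0, x)

def max_pool(x, size=2):
    """Non-overlapping max pooling; trims odd borders."""
    h, w = x.shape[0] - x.shape[0] % size, x.shape[1] - x.shape[1] % size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def tiny_cnn_forward(image, kernels, weights, bias):
    """conv -> ReLU -> pool per filter, then flatten, dense layer, softmax."""
    maps = [max_pool(relu(conv2d(image, k))) for k in kernels]
    features = np.concatenate([m.ravel() for m in maps])
    return softmax(weights @ features + bias)

rng = np.random.default_rng(0)
image = rng.random((28, 28))                 # stand-in for a grayscale gesture image
kernels = rng.standard_normal((4, 3, 3))     # 4 (untrained) 3x3 filters
n_features = 4 * 13 * 13                     # conv: 28->26, pool: 26->13, per filter
n_classes = 5                                # hypothetical number of static gestures
weights = rng.standard_normal((n_classes, n_features)) * 0.01
bias = np.zeros(n_classes)

probs = tiny_cnn_forward(image, kernels, weights, bias)
print(probs.shape)  # (5,): one probability per gesture class
```

In a real system the filters and dense weights would be learned by backpropagation over the ~55,000-image training set, and the class with the highest probability would be emitted as the translated gesture.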