Recognition of Hand Gestures Based on EMG Signals with Deep and Double-Deep Q-Networks

Date
2023-04
Language
en
Publisher
MDPI
Rights
Attribution 4.0 International (CC BY 4.0)
https://creativecommons.org/licenses/by/4.0/
Abstract
In recent years, hand gesture recognition (HGR) technologies that use electromyography (EMG) signals have been of considerable interest in developing human–machine interfaces. Most state-of-the-art HGR approaches are based mainly on supervised machine learning (ML). However, the use of reinforcement learning (RL) techniques to classify EMGs is still a new and open research topic. Methods based on RL have some advantages, such as promising classification performance and online learning from the user's experience. In this work, we propose a user-specific HGR system based on an RL agent that learns to characterize EMG signals from five different hand gestures using the Deep Q-Network (DQN) and Double-Deep Q-Network (Double-DQN) algorithms. Both methods use a feed-forward artificial neural network (ANN) to represent the agent's policy. We also performed additional tests by adding a long short-term memory (LSTM) layer to the ANN to analyze and compare its performance. We performed experiments using training, validation, and test sets from our public dataset, EMG-EPN-612. The final accuracy results demonstrate that the best model was DQN without LSTM, obtaining classification and recognition accuracies of up to (Formula presented.) and (Formula presented.), respectively. The results obtained in this work demonstrate that RL methods such as DQN and Double-DQN can obtain promising results for classification and recognition problems based on EMG signals.
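To illustrate the general formulation described in the abstract (an RL agent whose feed-forward policy network assigns gesture labels to EMG observations), the sketch below shows a generic DQN loop in PyTorch. It is a minimal illustration, not the authors' implementation: the feature dimension, network sizes, hyperparameters, +1/-1 reward for correct/incorrect labels, one-step episodes, and the synthetic data are all assumptions made here for demonstration purposes.

```python
# Minimal DQN sketch for framing EMG gesture classification as an RL problem.
# All concrete choices below (feature size, reward scheme, hyperparameters,
# synthetic data) are illustrative assumptions, not taken from the paper.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

N_FEATURES, N_GESTURES = 40, 5        # hypothetical feature and gesture counts
GAMMA, LR, BATCH, EPSILON = 0.99, 1e-3, 64, 0.1

class QNetwork(nn.Module):
    """Feed-forward ANN mapping an EMG feature vector to Q-values per gesture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, N_GESTURES),
        )
    def forward(self, x):
        return self.net(x)

policy_net, target_net = QNetwork(), QNetwork()
target_net.load_state_dict(policy_net.state_dict())
optimizer = optim.Adam(policy_net.parameters(), lr=LR)
replay = deque(maxlen=10_000)

def select_action(state):
    """Epsilon-greedy action, i.e., the predicted gesture label for one window."""
    if random.random() < EPSILON:
        return random.randrange(N_GESTURES)
    with torch.no_grad():
        return int(policy_net(torch.as_tensor(state, dtype=torch.float32)).argmax())

def train_step():
    """One DQN update; swap in the commented lines for the Double-DQN target."""
    if len(replay) < BATCH:
        return
    s, a, r, s2, done = map(np.array, zip(*random.sample(replay, BATCH)))
    s = torch.as_tensor(s, dtype=torch.float32)
    a = torch.as_tensor(a, dtype=torch.int64).unsqueeze(1)
    r = torch.as_tensor(r, dtype=torch.float32)
    s2 = torch.as_tensor(s2, dtype=torch.float32)
    done = torch.as_tensor(done, dtype=torch.float32)

    q = policy_net(s).gather(1, a).squeeze(1)
    with torch.no_grad():
        next_q = target_net(s2).max(1).values                   # DQN target
        # a2 = policy_net(s2).argmax(1, keepdim=True)            # Double-DQN:
        # next_q = target_net(s2).gather(1, a2).squeeze(1)       # decouple selection/evaluation
        target = r + GAMMA * (1.0 - done) * next_q
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Toy loop over synthetic feature windows (placeholder for real EMG features).
# Each window is treated as a terminal one-step decision, so done is always 1.
for step in range(200):
    state = np.random.randn(N_FEATURES).astype(np.float32)
    true_label = random.randrange(N_GESTURES)
    action = select_action(state)
    reward = 1.0 if action == true_label else -1.0   # assumed reward scheme
    next_state = np.random.randn(N_FEATURES).astype(np.float32)
    replay.append((state, action, reward, next_state, 1.0))
    train_step()
    if step % 50 == 0:
        target_net.load_state_dict(policy_net.state_dict())
```

The only structural difference between the DQN and Double-DQN variants in this sketch is the target computation: Double-DQN selects the next action with the online network and evaluates it with the target network, which reduces the overestimation bias of the plain max operator.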
Description
Indexing: Scopus.
Keywords
Deep Q-Network, Double-Deep Q-Network, Electromyography, Hand gesture recognition, Reinforcement learning
Citation
Sensors, Volume 23, Issue 8, April 2023, Article number 3905. Open Access.
DOI
10.3390/s23083905