A Learning Scheme for EMG Based Decoding of Dexterous, In-Hand Manipulation Motions

Electromyography (EMG) based interfaces are the most common solutions for the control of robotic, orthotic, prosthetic, assistive, and rehabilitation devices, translating myoelectric activations into meaningful actions. In recent years, a lot of emphasis has been put on the EMG based decoding of human intention, but very few studies have focused on the continuous decoding of human motion. In this work, we present a learning scheme for the EMG based decoding of object motions in dexterous, in-hand manipulation tasks. We also study the contribution of different muscles while performing these tasks and the effect of gender and hand size on the overall decoding accuracy. To do so, we use EMG signals recorded from 16 muscle sites (8 on the hand and 8 on the forearm) of 11 different subjects, together with an optical motion capture system that records the object motion. The object motion decoding is formulated as a regression problem using the Random Forests methodology. As features, we use the following time-domain metrics: root mean square, waveform length, and zero crossings. A 10-fold cross-validation procedure is used for model assessment, and a variable importance value is calculated for each feature. This study shows that subject specific, hand specific, and object specific decoding models offer better decoding accuracy than generic models.
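To make the pipeline described above more concrete, below is a minimal, illustrative sketch (not the authors' code) of the main ingredients named in the abstract: extracting the three time-domain features per EMG channel, fitting a Random Forests regressor, and assessing it with 10-fold cross validation. The window length, channel count, and synthetic data are assumptions made purely for the example.

```python
# Minimal sketch of EMG feature extraction + Random Forests regression with
# 10-fold cross validation. Data shapes and parameters are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score


def time_domain_features(window):
    """Compute RMS, waveform length, and zero crossings per EMG channel.

    window: array of shape (n_samples, n_channels).
    Returns a 1-D feature vector of length 3 * n_channels.
    """
    rms = np.sqrt(np.mean(window ** 2, axis=0))                 # root mean square
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)        # waveform length
    zc = np.sum(np.diff(np.sign(window), axis=0) != 0, axis=0)  # zero crossings
    return np.concatenate([rms, wl, zc])


# Hypothetical data: 200 EMG windows of 256 samples from 16 muscle sites,
# and the corresponding object motion target (e.g. a 6-DoF object pose).
rng = np.random.default_rng(0)
emg_windows = rng.standard_normal((200, 256, 16))
object_pose = rng.standard_normal((200, 6))

X = np.array([time_domain_features(w) for w in emg_windows])   # shape (200, 48)

# Random Forests regression assessed with 10-fold cross validation (R^2 score).
model = RandomForestRegressor(n_estimators=100, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, object_pose, cv=cv)
print("10-fold CV R^2: %.3f +/- %.3f" % (scores.mean(), scores.std()))

# Variable importance per feature, analogous to the importance analysis
# mentioned in the abstract.
model.fit(X, object_pose)
print(model.feature_importances_)
```

In practice the features would be computed on sliding windows of the recorded EMG streams and the targets taken from the motion capture system, but the structure of the regression and cross-validation steps would remain the same.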

More information can be found in the following publication:

Anany Dwivedi, Yongje Kwon, Andrew McDaid, and Minas Liarokapis, “A Learning Scheme for EMG Based Decoding of Dexterous, In-Hand Manipulation Motions,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2019 [PDF | BIB]